# Quantitative Analyst Interview Questions


Quantitative Analyst interview questions shared by candidates

### You bid for a coin. You're confident the price of the coin is between 0 and 100. If your bid is greater than the price, you win the coin and sell it to your friend at 1.5 times the price. What bid maximizes your profit?

17 Answers

↳

I loved this question and want to renew the debate. What do you think of my two approaches?

1) If we can only play this game once and our goal is to maximize profit (as the question states): I agree with the above that the expected value of the coin is 50. Given that, we bid 51 to win the auction and pocket 24. The problem is that we only win if the coin is worth something in (0, 50), which gives a new conditional expected value of 25, so we lose. We can deduct this way all the way down to a bid of zero.

2) Nothing beats a little Monte Carlo experiment. I created a 100 x 1,000,000 matrix, where 100 is the number of possible bids and 1M is the number of uniformly distributed random prices on 0-100, and calculated the expected gain at each bid level from 0 to 100. I wish I could post the MATLAB graph here: it looks like the downward-facing half of a parabola with a max value of 0 and a min of -25. Results: the best expected gain of 0 is achieved at a bid of 0; the worst average gain of -25 is at a bid of 100. Comments appreciated!
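The Monte Carlo experiment described above can be sketched in Python with NumPy instead of MATLAB (the uniform-price assumption and the 1.5x resale multiplier are taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
prices = rng.uniform(0, 100, size=1_000_000)   # uniform random prices on [0, 100]
bids = np.arange(0, 101)

# For each bid b: you win only when b > price, pay b, and resell at 1.5 * price,
# so profit = 1.5 * price - b when you win, 0 otherwise.
expected_gain = np.array([
    np.mean(np.where(b > prices, 1.5 * prices - b, 0.0)) for b in bids
])

best_bid = bids[np.argmax(expected_gain)]
print(best_bid)   # bid 0 is optimal; every positive bid has negative expected gain
```

Analytically the curve is E[profit] = -b^2/400, which matches the simulated shape: 0 at b = 0 and -25 at b = 100.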

↳

I had the longest argument with a friend over this. You cannot get a positive expected value no matter what you bid. If you bid $50, you can discount the EVs for coin values 51-100, since those contribute 0 (you don't win the auction). If you bid $50 and the coin's worth $50, you sell for $75 and make $25; but if the coin's worth $0, you lose $50. Keep comparing the extremes and you will see that in almost all cases you lose more than you make. That's the best I can explain it; I had to use a spreadsheet to prove it to my friend. To get an EV of 0, you'd need to change the multiplier to 2, which makes sense: if X is your bid, your expected profit conditional on winning is (X/2) * 1.5 - X.

↳

I don't think the argument "average of 50 => bid 75" is accurate. You can't simply plug the expected value of the price into the profit curve; by integration, the area under the profit curve gives the expected profit. What you can do, however, is find the mean of the profit directly, but you will end up with the same answer: the expected profit is negative everywhere except at 0.

### You observe a sample of measurements coming from a fixed length ruler. If the object is shorter than the ruler you observe the actual measurement. Otherwise you observe the length of the ruler. What would be a good estimator of the ruler length?

15 Answers

↳

Get rid of the measurements that are equal to the ruler length, then take the average of the remaining measurements, which lie in the range (0, ruler_length). The ruler length is 2 times this average (assuming the underlying lengths are uniform).

↳

Assuming the measurements are on a continuous scale, you would have a lot of mass at the point exactly corresponding to the ruler's length, so you could use something akin to the mode, I'd imagine.

↳

The mode should work, right? The length of the ruler is likely to be the only specific value that shows up more than once in the data. Less
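A quick sketch of the two ideas above on hypothetical data (the true length and the uniform object-length distribution are assumptions made for the illustration; the 2x-mean estimator relies on the uniform assumption):

```python
import random
from collections import Counter

random.seed(1)
RULER = 30.0                                            # true length, unknown to the estimator
true_lengths = [random.uniform(0, 60) for _ in range(10_000)]
obs = [round(min(x, RULER), 2) for x in true_lengths]   # censored at the ruler length

# Mode estimator: the censoring point is the only value that repeats heavily.
mode_est = Counter(obs).most_common(1)[0][0]

# 2 * mean of the uncensored observations (valid if lengths below the ruler are uniform).
uncensored = [x for x in obs if x < RULER - 1e-9]
mean_est = 2 * sum(uncensored) / len(uncensored)

print(mode_est, mean_est)   # both should be close to 30
```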

### How do you measure 9 minutes using only a 4-minute and a 7-minute hourglass?

14 Answers

↳

Start both timers together. When the 4-minute timer is done, flip it; the 7-minute timer will have 3 minutes left. When the 7-minute timer is done, the 4-minute timer will have 1 minute left. Now you can count to 9 minutes by simply letting the 4-minute timer expire (1 min), flipping it and letting it expire (4 min), then flipping it again and letting it expire (4 min). 1 + 4 + 4 = 9.

↳

The key is understanding that you have to use the two hourglasses together. Since this problem could be asked with different values for the hourglasses and the total time, it's more important to understand how you use the tools than to memorize a specific example; the question is used to separate those who can apply their knowledge to solve problems from those who memorize answers "from the book".

Start both timers. After four minutes, the four-minute timer has expired and the seven-minute timer has three minutes remaining; flip the four-minute timer. After seven minutes, the seven-minute timer has expired and the four-minute timer still has one minute left; flip the seven-minute timer. After eight minutes, the four-minute timer has expired for the second time, and the seven-minute timer has accumulated one minute since its last flip; flip the seven-minute timer over, and when it expires, nine minutes have elapsed.

For extra measure, you can always throw in something like, "assuming the timers can be flipped over nearly instantly..."

↳

The same strategy as a minute-by-minute trace (instant flips assumed; numbers are minutes of sand left to run):

| Elapsed | 4-min glass | 7-min glass | Action |
|--------:|------------:|------------:|--------|
| 0 min | 4 | 7 | start both glasses |
| 1 min | 3 | 6 | |
| 2 min | 2 | 5 | |
| 3 min | 1 | 4 | |
| 4 min | 0 | 3 | flip the 4-minute glass |
| 5 min | 3 | 2 | |
| 6 min | 2 | 1 | |
| 7 min | 1 | 0 | flip the 7-minute glass |
| 8 min | 0 | 6 | flip the 7-minute glass again (1 minute of sand now on top) |
| 9 min | 0 | 0 | 9 minutes measured |
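The strategy can be checked with a minute-by-minute simulation (instant flips assumed; `top7`/`bot7` track the sand in the 7-minute glass):

```python
# Verify the 4/7-hourglass strategy: flip the 4 when it empties (t=4),
# flip the 7 when it empties (t=7), flip the 7 again when the 4 empties (t=8).
t4 = 4                      # sand remaining on top of the 4-minute glass
top7, bot7 = 7, 0           # sand on top / bottom of the 7-minute glass

for t in range(1, 10):      # simulate minutes 1..9
    if t4 > 0:
        t4 -= 1
    if top7 > 0:
        top7 -= 1
        bot7 += 1
    if t == 4:
        t4 = 4                      # flip the empty 4-minute glass
    if t in (7, 8):
        top7, bot7 = bot7, top7     # flip the 7-minute glass

# At t = 9 the 7-minute glass runs out: exactly 9 minutes measured.
print(t, top7)   # → 9 0
```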

### There are 3 coins. One coin has heads on both sides, one has tails on both sides, and the third has heads on one side and tails on the other. I pick up a coin and toss it, and I get heads. What is the chance that the coin I picked has heads on both sides?

14 Answers

↳

2/3

↳

You have a 1/3 chance of picking the double-headed coin, in which case you surely get heads, and a 1/3 chance of picking the normal coin, followed by a 1/2 chance of heads. So the probability of choosing the double-headed coin and getting heads is 1/3, while the probability of choosing the normal coin and getting heads is 1/6. Then, given that you got heads, the chance you chose the double-headed coin is (1/3)/(1/3 + 1/6) = 2/3.

↳

There are 2 heads on the double-headed coin and 1 head on the normal coin, so P(the head came from the double-headed coin) = 2/3.
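The 2/3 answer can be confirmed by enumerating the six equally likely (coin, face) outcomes:

```python
from fractions import Fraction

# Each coin has two faces; picking a coin and a face uniformly at random
# gives six equally likely outcomes.
coins = {"HH": ("H", "H"), "TT": ("T", "T"), "HT": ("H", "T")}

outcomes = [(name, face) for name, faces in coins.items() for face in faces]
heads = [(name, face) for name, face in outcomes if face == "H"]

p = Fraction(sum(1 for name, _ in heads if name == "HH"), len(heads))
print(p)   # → 2/3
```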

### (4) You are given two Gaussian variables, X_1 and X_2, with means m_1, m_2 and variances v_1, v_2. Suppose you know the sum X_1 + X_2 is equal to n. What is the expected value of X_2?

13 Answers

↳

For question 1, recall that a positive definite symmetric matrix is one with all eigenvalues greater than 0, so one method is to compute the eigenvalues of this matrix. Call the matrix M, and let the n x n matrix A be the matrix of all a's. Then M = A - (a-1)I, where I is the identity matrix. This is a matrix polynomial in A (the polynomial p(x) = x - (a-1)). By the spectral mapping theorem, the eigenvalues of M are p(t), where t runs over the eigenvalues of A, so it suffices to find the eigenvalues of A. Since A clearly has rank 1, the eigenvalue 0 appears with multiplicity n-1. The remaining eigenvalue is na: the trace is na, and since 0 has multiplicity n-1, the characteristic polynomial is (x-t)x^(n-1); matching the trace gives t = na. It follows that the eigenvalues of M are 1-a (with multiplicity n-1) and na-a+1. Because they must all be positive, we get -1/(n-1) < a < 1.

↳

Question 2 seems incomplete. Do you mind showing the complete question? Thanks!

↳

For question 2, you can do as follows. Suppose the first n is broken into n_1 + n_2. We now have two subproblems: one starting with n_1, with S_1 the tally we keep for it, and one starting with n_2, with S_2 its tally. Then S = S_1 + S_2 + n_1 n_2. Using the hypothesis someone above suggested, S = n(n-1)/2, strong induction yields the required result.
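None of the replies address the Gaussian question in the heading. Assuming X_1 and X_2 are independent (the question doesn't say so; this is an assumption), the standard result is E[X_2 | X_1 + X_2 = n] = m_2 + v_2 (n - m_1 - m_2)/(v_1 + v_2), which a quick simulation supports:

```python
import numpy as np

rng = np.random.default_rng(0)
m1, v1, m2, v2, n = 1.0, 2.0, 3.0, 4.0, 5.0   # arbitrary example parameters

x1 = rng.normal(m1, np.sqrt(v1), size=2_000_000)
x2 = rng.normal(m2, np.sqrt(v2), size=2_000_000)

# Condition on the sum being (approximately) n.
mask = np.abs(x1 + x2 - n) < 0.02
empirical = x2[mask].mean()
formula = m2 + v2 * (n - m1 - m2) / (v1 + v2)
print(empirical, formula)   # both ≈ 3.67
```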

### 3) Poker: 26 red and 26 black cards. You draw one card at a time and can choose to guess whether it's red; you have only one chance. If you are right, you get 1 dollar. What's the strategy, and what's the expected earning?

13 Answers

↳

There is symmetry between red and black. Each time you pull a card, it is equally likely to be red or black (assuming you haven't looked at the previous cards you pulled). Thus no matter when you guess, your odds are 50%, and the expected return is 50 cents.

↳

The problem should be: cards are drawn at random without replacement, and on every draw you have one chance to guess. So the strategy is: on the first draw, guess red at random. If correct, you get a dollar, and on the next draw you know there are fewer reds than blacks, so you guess black. If your first guess was wrong, you guess red on the next round. It's all about conditioning on the information you have from the previous draws.

↳

The problem statement is not very clear. What I understand is: you take one card at a time, and you can choose to guess or to look at it. If you guess and it's red, you gain $1, and whatever the result, the game is over after the guess. The answer is then $0.5 under any strategy. Suppose there are x red and y black cards left. If you guess now, your chance of winning is x/(x+y). If instead you look at this card and guess the next one, your chance is x/(x+y) * (x-1)/(x+y-1) + y/(x+y) * x/(x+y-1) = x/(x+y), which is the same. A rigorous proof is done by induction, starting from x, y = 0, 1.
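A simulation comparing a few stopping rules (guess immediately, wait until reds outnumber blacks among the remaining cards, wait until the last card) supports the claim that every strategy wins with probability 0.5:

```python
import random

def play(strategy, trials=50_000, seed=0):
    """Fraction of games won when `strategy(red_left, black_left)` says when to guess red."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        deck = ["R"] * 26 + ["B"] * 26
        rng.shuffle(deck)
        red, black = 26, 26
        for i, card in enumerate(deck):
            if strategy(red, black) or i == len(deck) - 1:
                wins += card == "R"     # guess "red" on this card
                break
            red -= card == "R"
            black -= card == "B"
    return wins / trials

guess_now = lambda r, b: True
wait_surplus = lambda r, b: r > b       # guess once reds outnumber blacks
wait_end = lambda r, b: r + b == 1      # guess on the final card

for s in (guess_now, wait_surplus, wait_end):
    print(round(play(s), 3))            # all ≈ 0.5
```

The underlying reason is that the fraction of red cards remaining is a martingale, so no stopping rule can beat 0.5.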

### N points lie on a circle. You draw lines connecting all the points to each other. These lines divide the circle into a number of regions. How many regions are there? Assume the points are placed so as to give the maximum number of regions for that N.

13 Answers

↳

It is "Moser's circle problem": 1 + nC2 + nC4.

↳

This is a famous example in mathematics, often used as a warning against naive generalization. Here are the answers for the first six natural numbers:

(# points) : (# regions)
1 : 1
2 : 2
3 : 4
4 : 8
5 : 16
6 : 31

Yes, 31. See, e.g., Conway and Guy's "The Book of Numbers" for an account of this.

↳

Err, 2*n-1+summation(n-1C->n-1)
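The closed form 1 + C(n,2) + C(n,4) cited above reproduces the sequence, including the surprising 31 at n = 6:

```python
from math import comb

def regions(n):
    # Moser's circle problem: maximal number of regions cut out by
    # all chords between n points on a circle.
    return 1 + comb(n, 2) + comb(n, 4)

print([regions(n) for n in range(1, 8)])   # → [1, 2, 4, 8, 16, 31, 57]
```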

### The first question he gave me was not hard. 1. You call someone's house and ask if they have two children. The answer happens to be yes. Then you ask if one of their children is a boy. The answer happens to be yes again. What's the probability that the second child is a boy? 2. (Much harder) You call someone's house and ask if they have two children. The answer happens to be yes. Then you ask if one of their children is named William. The answer happens to be yes again. (We assume William is a boy's name, and that it's possible both children are Williams.) What's the probability that the second child is a boy?

12 Answers

↳

1. BB, BG, GB, GG have probability 1/4 each, which reduces to BB, BG, GB with probability 1/3 each. So the probability of BB is 1/3. 2. Let w be the probability that a boy is named William. The probability of at least one William in the family is 2w - w^2 for BB, w for BG, w for GB, and 0 for GG. So the probability of BB given at least one William is (2w - w^2)/(2w + 2w - w^2) ~ 1/2 for small w.

↳

The answer by the anonymous poster on Sep 28, 2014 gets closest. However, I think the calculation P[Y] = 1 - P[C1's name != William AND C2's name != William] should result in 1 - (1 - e/2)(1 - e/2) = e - e^2/4, as opposed to the poster's 1 - e^2/4, which overstates the probability of Y.

For example, assume e (the probability that X is named William given X is a boy) is 0.5, meaning half of all boys are named William, and let Y = {C1 is William or C2 is William}. Then e - e^2/4 gives P(Y) = 7/16, while 1 - e^2/4 gives P(Y) = 15/16, which is way too high: there is more than one way for both C1 and C2 not to be Williams, e.g. both are girls, or both are boys not named William.

So the final answer becomes (3e/2 - e^2/2) * 0.5 / (e - e^2/4) = (3e - e^2)/(4e - e^2) = (3 - e)/(4 - e).

One reason I thought this might be incorrect is that setting e = 0 does not give P(C2 = Boy | Y) = 0 the way the anonymous poster's answer does. However, e = 0 violates the question's assumptions: if e = 0, no boy is named William, yet the question says William is a boy's name, so no one could be named William — but then how did the question come up with a person named William?

↳

I think "second child" refers to the other child (the one not confirmed on the phone). In that case the answer to the first question is 1/3, and to the second it is (1-p)/(2-p), where p is the total probability that a child is named William. As a sanity check, if all boys are named William, the two answers coincide.
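A simulation separates the two readings debated above, assuming boys are named William independently with probability `E` (a hypothetical parameter): interpretation A asks about a specific child C2, matching the (3-e)/(4-e) answer, while interpretation B asks about the sibling of the confirmed William, matching (1-p)/(2-p) with p = E/2.

```python
import random

random.seed(0)
E = 0.3                      # hypothetical P(a boy is named William)
trials = 500_000

a_num = a_den = b_num = b_den = 0
for _ in range(trials):
    c1, c2 = random.choice("BG"), random.choice("BG")
    w1 = c1 == "B" and random.random() < E
    w2 = c2 == "B" and random.random() < E
    if w1 or w2:                                # at least one William in the family
        a_den += 1
        a_num += c2 == "B"                      # A: is the specific child C2 a boy?
        b_den += 1
        b_num += (c2 == "B") if w1 else (c1 == "B")   # B: is the William's sibling a boy?

print(a_num / a_den, (3 - E) / (4 - E))          # empirical vs (3-e)/(4-e)
print(b_num / b_den, (1 - E / 2) / (2 - E / 2))  # empirical vs (1-p)/(2-p), p = E/2
```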

### There are 25 horses. Each time you can race 5 horses together. Now you need to pick the top three horses among them. How many races do you need to conduct?

12 Answers

↳

I think it is 7. Kannp, the top five are not established after 5 races, because the second-place horse in one race might be faster than the winner of another. Run five races of five horses each and keep the top 3 from each race. Then race the five winners against each other (race 6). The bottom two finishers are discarded, along with the four horses that lost to them earlier; the two horses that lost to the third-place finisher are discarded; and the horse that finished 3rd behind the horse who came second in race 6 is discarded. That leaves six horses. Race all of them except the horse that has already won twice and keep the top two; combined with the horse that sat out, those are your top three. Done in 7 races.

↳

He gave a very good explanation. I'll decline to explain why prm is incredibly wrong.

↳

7 races.
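The 7-race procedure described above can be verified with a small simulation (random horse speeds are an assumption for testing; `race` returns a heat's finishing order):

```python
import random

def race(horses, speed):
    # Finishing order of one heat, fastest first (at most 5 horses).
    return sorted(horses, key=lambda h: speed[h], reverse=True)

def top3_in_7_races(speed):
    horses = list(range(25))
    groups = [horses[i * 5:(i + 1) * 5] for i in range(5)]
    heats = [race(g, speed) for g in groups]            # races 1-5
    winners = [h[0] for h in heats]
    final = race(winners, speed)                        # race 6
    first, second, third = final[:3]
    heat_of = {h[0]: h for h in heats}
    # Only five horses can still be 2nd or 3rd overall:
    candidates = [heat_of[first][1], heat_of[first][2],
                  second, heat_of[second][1], third]
    playoff = race(candidates, speed)                   # race 7
    return [first, playoff[0], playoff[1]]

random.seed(0)
for _ in range(1000):
    speed = [random.random() for _ in range(25)]
    truth = sorted(range(25), key=lambda h: speed[h], reverse=True)[:3]
    assert top3_in_7_races(speed) == truth
print("7 races suffice")
```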

### You have 2 decks of cards, each containing both red and black cards. One deck has twice the number of cards of the other, with the same color ratio (so one deck has 52 cards and the other 104, both half red and half black). I offer you a game: first you choose which deck to play with; then you draw 2 cards at random from it. If both are red, I give you a Ferrari. Which deck do you choose?

12 Answers

↳

The unalert interviewee would answer "it doesn't matter, the probability is the same." While this is true for the first card, you have a higher probability of drawing a second red card from the big deck than from the small one. So I chose the big deck, and I was right.

↳

Extreme case: deck 1 has 1 red and 1 black; deck 2 has 2 red and 2 black. Two reds is impossible from deck 1, but has probability 2/4 * 1/3 = 1/6 from deck 2. So the more cards, the better your chance.

↳

Mathematically: (26/52) * (25/51) vs. (52/104) * (51/103), i.e. 25/102 vs. 51/206, and 51/206 > 25/102.
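The arithmetic above checks out with exact rational arithmetic:

```python
from fractions import Fraction

small = Fraction(26, 52) * Fraction(25, 51)     # two reds from the 52-card deck
big = Fraction(52, 104) * Fraction(51, 103)     # two reds from the 104-card deck

print(small, big, big > small)   # → 25/102 51/206 True
```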