hapakenya


PhilipJune22

Market efficiency modeling. To work out how to obtain an OCRYCOP chart that would reflect the predictive power of the closing odds, I built a simple model that simulates the movement of the odds from the opening to the closing of the line. The model consisted of 10,000 bets, each with an opening and a closing price.

To replicate the uncertainty in the "true" probabilities of the outcomes being bet on, I randomly distributed the opening odds around a mean of 2.00 with a standard deviation (σ) of 0.15 (so about two-thirds of the odds fell between 1.85 and 2.15, and about 95% fell between 1.70 and 2.30).


Thus, while the "true" odds for every bet, known only to Laplace's demon (and me), were 2.00, the opening odds published by the hypothetical bookmaker in my model deviated somewhat from this mean. I chose a standard deviation of 0.15 because it approximates the opening-to-closing price movement seen in real betting markets for prices around 2.00.
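The setup described above can be sketched in Python with NumPy. The seed and variable names are my own assumptions; the article does not specify an implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # arbitrary seed for reproducibility

TRUE_ODDS = 2.00   # "true" fair odds, known only to the modeller
SIGMA = 0.15       # standard deviation of the published opening odds
N_BETS = 10_000

# Published opening odds scatter normally around the "true" odds of 2.00
opening_odds = rng.normal(TRUE_ODDS, SIGMA, size=N_BETS)

# Roughly two-thirds of values fall within one sigma of the mean
# (1.85 to 2.15), and about 95% within two sigmas (1.70 to 2.30)
within_one_sigma = np.mean(np.abs(opening_odds - TRUE_ODDS) <= SIGMA)
within_two_sigma = np.mean(np.abs(opening_odds - TRUE_ODDS) <= 2 * SIGMA)
```

The two fractions at the end simply confirm that the simulated sample reproduces the 1.85–2.15 and 1.70–2.30 ranges quoted in the text.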

For example, with a standard deviation of 0.05, 95% of the published opening odds around 2.00 would deviate by no more than ±5%. That range is too narrow given the odds movements actually observed in the market. Conversely, a standard deviation of 0.3 or higher would imply that the bookmaker is poor at setting odds, which we know is not generally true.

Market efficiency is an interesting concept that applies to large samples. If we cannot know the "true" probability of the outcome of a particular event, how can we assess the efficiency of the odds offered on that outcome?

It is highly unlikely that a bookmaker will publish odds of 3.00 when the "true" odds are 2.00. It can happen, but usually as the result of an obvious error or an unforeseen significant event that was unknown when the odds were set. In such circumstances it is reasonable to speak of a change in the "true" odds themselves. Let's return to our model. I have defined the opening odds; what about the closing odds?

In theory, the closing odds reflect the opinions expressed by bettors with their money. Assume, in the extreme case, that the randomly introduced initial uncertainty remains at the same level, even though bettors' opinions draw on the full body of information about the "true" probability of an outcome. Clearly, uncertainty remaining at the same level is implausible, since betting markets are reasonably efficient Bayesian processors of information: they continuously refine, update, and improve opinions about the likelihood of an event, and thereby reduce the level of uncertainty.

Nevertheless, in this version of our model the mean and standard deviation of the closing odds are also 2.00 and 0.15, respectively. We can now calculate the ratio of opening to closing odds for each pair. Knowing the "true" probability of each outcome (50%), we can calculate the expected return of bets placed at the opening and at the closing odds for all 10,000 matches. Finally, we can plot the expected returns at opening and closing prices against the ratio of opening to closing price. The chart above was plotted using actual Pinnacle match odds.
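The expected-return calculation described above can be sketched as follows. The expected return of a unit bet is (probability × odds) − 1; the independent draws and seed are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

TRUE_PROB = 0.5   # "true" probability of each outcome (fair odds of 2.00)
N_BETS = 10_000

# Opening and closing odds both scatter around the "true" odds of 2.00
opening = rng.normal(2.00, 0.15, size=N_BETS)
closing = rng.normal(2.00, 0.15, size=N_BETS)

ratio = opening / closing           # opening-to-closing odds ratio per bet
er_open = TRUE_PROB * opening - 1   # expected return betting at the open
er_close = TRUE_PROB * closing - 1  # expected return betting at the close
```

Because both sets of odds average 2.00, both expected returns average close to zero over the full sample; the interesting question is how they vary with the ratio.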

The first of the six graphs below shows the output of this model. The blue and red lines show the average expected return on turnover for equal-sized bets (y-axis) placed at the opening and closing prices respectively, for groups of 50 matches from the 10,000-bet sample, ordered by the opening-to-closing odds ratio minus 1 (x-axis). The resulting values are not much like the Pinnacle data above.
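The binning behind those lines can be reproduced with a short sketch: sort the simulated bets by the opening-to-closing ratio, then average expected returns in groups of 50 (the group size is from the text; everything else is my assumption).

```python
import numpy as np

rng = np.random.default_rng(seed=5)
n, bin_size = 10_000, 50  # 50 matches per plotted point

opening = rng.normal(2.00, 0.15, size=n)
closing = rng.normal(2.00, 0.15, size=n)

# Sort bets by opening/closing ratio, then average within bins of 50
order = np.argsort(opening / closing)
x = (opening / closing - 1)[order].reshape(-1, bin_size).mean(axis=1)
y_open = (0.5 * opening - 1)[order].reshape(-1, bin_size).mean(axis=1)
y_close = (0.5 * closing - 1)[order].reshape(-1, bin_size).mean(axis=1)
```

Plotting `y_open` and `y_close` against `x` gives the blue and red lines: high ratios go with profitable opening bets and unprofitable closing bets.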

While my opening and closing prices taken together are theoretically efficient, because on average they equal the "true" prices, in practice the ratio of opening to closing price predicts only half of the expected return (OCRYCOP = 0.5). For example, a ratio of 110% yields a return of 105% (a 5% profit on turnover) for a bet at the opening price, and a return of 95% (a 5% loss on turnover) for a bet at the closing price.
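The OCRYCOP = 0.5 figure can be recovered by regression: the return at the open depends only on the opening deviation, while the ratio carries the deviations of both prices, so the fitted slope comes out near one-half. A minimal sketch (a large sample is used only to keep the fit stable):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000  # larger sample keeps the fitted slope stable

opening = rng.normal(2.00, 0.15, size=n)
closing = rng.normal(2.00, 0.15, size=n)  # same, independent uncertainty

# Expected return of a unit bet at the opening price ("true" probability 0.5)
er_open = 0.5 * opening - 1

# Regress expected return on (opening/closing ratio - 1): the slope
# recovers only about half of the ratio signal, i.e. OCRYCOP ≈ 0.5
slope, intercept = np.polyfit(opening / closing - 1, er_open, 1)
```

Intuitively, half of the ratio's variation comes from noise in the closing price, which tells us nothing about the value of a bet placed at the open.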


Clearly, in this case the ratio of opening to closing odds is not a good predictor of profitability; equally, our individual closing odds do not perform well. There is a simple reason for this. First, we already know that our individual closing odds are inefficient: they do not match the "true" odds of 2.00, because I deliberately distributed them randomly around that value.

Secondly, the largest opening-to-closing ratios occur when my random odds generator produces high opening and low closing odds. The highest ratio it generated was 1.55 (opening odds of 2.27 and closing odds of 1.46). In fact, betting at an opening price of 2.27 when the "true" price is 2.00, the expected profit would be 2.27 / 2.00 − 1 = 0.135, or 13.5%, not 55% (as my original hypothesis would suggest).
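The arithmetic for that extreme pair can be checked directly (all numbers are taken from the text):

```python
# A mispriced opening line of 2.27 against "true" odds of 2.00
true_odds = 2.00
opening = 2.27
closing = 1.46

ratio = opening / closing       # ≈ 1.55, the largest ratio in the sample
edge = opening / true_odds - 1  # expected profit ≈ 0.135, i.e. 13.5%
```

The edge depends only on how far the opening price sits from the "true" price, not on how far it sits from the closing price, which is why 1.55 does not translate into a 55% profit.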

In the five additional graphs, the model is rerun while gradually reducing the random variability (standard deviation) of the closing odds in steps of 0.03; the variability of the opening odds stays the same. Notice that as the variability of the closing odds around the "true" odds of 2.00 decreases, the OCRYCOP value approaches 1. In the extreme case where every closing price is exactly 2.00, making each one perfectly efficient, there is a perfect 1:1 correlation.
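That sweep can be sketched by repeating the regression while shrinking the closing-odds standard deviation in steps of 0.03 (the step size is from the text; the rest is my assumption):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 100_000

opening = rng.normal(2.00, 0.15, size=n)  # opening uncertainty stays fixed

slopes = []
for sigma_close in [0.15, 0.12, 0.09, 0.06, 0.03, 0.0]:
    if sigma_close > 0:
        closing = rng.normal(2.00, sigma_close, size=n)
    else:
        closing = np.full(n, 2.00)  # perfectly efficient closing odds
    er_open = 0.5 * opening - 1
    slope, _ = np.polyfit(opening / closing - 1, er_open, 1)
    slopes.append(slope)
```

As the closing odds tighten around 2.00, the fitted slope climbs from about 0.5 toward exactly 1: with perfectly efficient closing prices, the ratio and the expected return at the open carry identical information.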

Let's look again at the chart built from Pinnacle's actual betting odds. The trend lines (and their equations) are quite consistent with the perfect-correlation version of our model. Yet we clearly see some underlying variability in the values: not all points sit exactly on the trend lines. Some of that scatter is, of course, down to good or bad luck when placing bets in the real world (my model uses expected profit, so luck plays no part in it).

Despite this, the belief that every closing price perfectly matches the "true" price is surely unjustified. The problem, then, is that without perfectly efficient closing odds we are left with a less-than-ideal correlation between the opening-to-closing odds ratio and expected return (OCRYCOP < 1). Is there a way around this problem? I will address it in the second part of this article.
