Eurovision 2014: Bonus final predictions

This post is part of a series describing a predictive model for the Eurovision Song Contest. The full set of posts can be found here.


And there’s more!

As an addendum to my predictions for tomorrow’s final, I thought it would be interesting to look at the distribution of finishing places for each country. One of the main motivations for this is a common misinterpretation of the list of winning probabilities I give each year.

Last year, I listed the UK as the 19th most likely country to win the final. As it turned out, the UK came in 19th place, and a number of people congratulated me on the accuracy of the prediction. Now, as lovely as it is to be congratulated, I never predicted that the UK would come 19th. There’s a world of difference between “19th most likely to win” and “will most likely come 19th”. I demonstrated this two years ago in my wrap-up post for the 2012 contest, with Malta as the particular example.

I think the easiest way to avoid this confusion is to make some actual predictions for each country’s finishing place. That way, hopefully nobody will misinterpret the winning probabilities.

[Figure: Place ranges]

What I’ve plotted here, as the green bars, is the inter-quartile range of finishing position for each country. So, for example, Austria’s bar goes from 7 to 16. This means that in 50% of simulations, Austria finished between these two positions (inclusive). In 25% of simulations, Austria finished in 7th or better, and in 25% of simulations 16th or worse. The black bar shows the average placing, which is not necessarily a whole number. In this case, on average Austria finished 11.87th.
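
For the curious, these summaries fall straight out of the simulation output. A minimal sketch in Julia, where the positions are random stand-ins rather than real model output:

    using Statistics

    # stand-in for real model output: one finishing place per simulation
    positions = rand(1:26, 10_000)

    lo, hi = quantile(positions, [0.25, 0.75])   # the inter-quartile range (green bar)
    avg = mean(positions)                        # the average placing (black bar)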

Because we’ve got better knowledge of song quality this year, there’s definitely a strong correspondence between the ordering by winning probability (as seen last time) and the ordering here. There are a few discrepancies though. For example, Belarus are one of the few countries which didn’t win a single simulation, yet they’re still forecast to finish ahead of France and Germany, because they can rely on votes from the other ex-Soviet bloc countries.

I should mention here that the model is generally pretty bad at predicting scores at the bottom end of the table. Although the dreaded “nul points” hasn’t happened since 2003, the model still produces one in over half of its simulations. This may mean that the placings near the bottom are a bit off as well.

I should also mention that it’s extremely unlikely that this exact ordering comes up. With twenty-six finalists, there are approximately four hundred trillion trillion[1] possible orderings. Picking the right one by chance alone is roughly as likely as three separate entrants being independently struck by lightning during the interval act.
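
If you’d rather not count, Julia’s arbitrary-precision integers will check the footnote’s arithmetic:

    # the number of distinct orderings of 26 finalists
    factorial(big(26))   # 403291461126605635584000000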

Because these are probabilistic forecasts, it’s likely that some of them will be wrong. In fact, it would be a problem if they weren’t[2]. If a model claims something will happen 50% of the time, then that thing should fail to happen 50% of the time as well. In a well-calibrated model, a 48% prediction (such as our prediction that Sweden will win) should be wrong a little over half the time.

If we take the inter-quartile ranges as predictions, then we should expect the true position to lie inside them 50% of the time[3]. Or, looking at it another way, we should expect half of our predictions to be right, and half to be wrong. Much more or less than that and there’s a problem with the model.
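
Checking this coverage after the contest is just counting. A sketch, where Austria’s interval is the real one from above and everything else is invented:

    # intervals: country => (low, high); results: country => actual finishing place
    intervals = Dict("Austria" => (7, 16), "Sweden" => (1, 3))   # Sweden's is invented
    results   = Dict("Austria" => 11, "Sweden" => 4)             # invented results

    covered  = count(c -> intervals[c][1] <= results[c] <= intervals[c][2], keys(results))
    coverage = covered / length(results)   # should be near 0.5 for a calibrated model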

But who’s got the best song?

Another way of looking at this data is to compare how well a song is likely to finish with how “good” the model thinks it is. Of course, by this stage the model “quality score” incorporates information about the running order and probably other information, but it’s still interesting to get an idea of how the mechanics of the contest favour some countries over others.

[Figure: Place vs quality]

There is definitely a strong relationship between song quality and finishing position, as one would hope, but the countries which deviate from this relationship are probably the most interesting. Netherlands and Austria, according to the model, have the second and fourth best songs, and yet they’re forecast to finish in fifth and eleventh place respectively. This reflects their lack of established voting patterns.

On the flip side, Ukraine manage to squeeze a forecast third-place finish out of the seventh best song. Russia, even more egregiously, have a song which ranks third from the bottom in quality, but they still manage to crack the top ten in terms of mean finishing position.

Before I get deluged with complaints from Russians, I should say that the model quality score isn’t really intended to match up with objective musical quality. Instead it models the opinion of a completely average Eurovision voter[4]. It’s entirely possible that they just don’t like twins.


  1. Four hundred and three septillion, two hundred and ninety-one sextillion, four hundred and sixty-one quintillion, one hundred and twenty-six quadrillion, six hundred and five trillion, six hundred and thirty-five billion, five hundred and eighty-four million. You go and count, I’ll wait here.

  2. Nate Silver wrote a nice piece on checking this calibration for his March Madness basketball model.

  3. In this case it will be slightly more than 50%, because the position variable only takes on discrete values. The quantiles don’t actually divide up the set of outcomes very nicely.

  4. The best available approximation to this is a Hungarian.


Eurovision 2014: Final predictions

This post is part of a series describing a predictive model for the Eurovision Song Contest. The full set of posts can be found here.


A learning experience

It was probably reasonable to think of Tuesday’s model performance as mixed: the overall score was bad, but only because of some quite surprising qualifiers. On the other hand, last night’s performance was terrible. The particular set of qualifiers didn’t show up even once in 10,000 simulations. That’s substantially worse than you’d expect if the model were picking qualifiers at random.

The most egregious mistake was the model’s near-absolute certainty (93%) that Israel was going to qualify. This, as it turns out, was not the case. Looking into it, I think I now understand why this happened, and hopefully can avoid it next year.

The problem arose partially because of how weak the field was. One of the consequences of this was that the Betfair odds for all of the entries were extremely long. The “favourite” was Norway, with a price of 26.5 (~4% win probability). Israel had the third shortest odds, at a price of 65.0 (~1.5% win probability).
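
(For decimal prices like these, the implied probability is just the reciprocal; this sketch ignores any overround in the market.)

    implied_prob(price) = 1 / price   # decimal odds to win probability

    implied_prob(26.5)   # ≈ 0.038 (Norway)
    implied_prob(65.0)   # ≈ 0.015 (Israel)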

Now, as I described previously, the song quality is estimated from a linear fit to the logarithm of the odds. This means that the difference in quality between a song at odds of 2.0 and a song at odds of 4.0 is the same as between a song at 20.0 and a song at 40.0, or a song at 200.0 and a song at 400.0. This almost certainly overemphasises the differences between songs with long odds. In reality, there’s not a huge amount of difference between a song which trades at 100.0 and a song which trades at 1000.0, while a song at 2.0 and a song at 20.0 are worlds apart.
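
To see the problem concretely: with a log-linear fit, equal odds ratios always map to equal quality gaps. The coefficients here are illustrative, not the fitted values:

    # quality assumed linear in log(odds); a and b are illustrative only
    a, b = 0.0, -1.0
    quality(odds) = a + b * log(odds)

    quality(2.0) - quality(4.0)       # ≈ 0.693
    quality(20.0) - quality(40.0)     # ≈ 0.693, the same gap
    quality(200.0) - quality(400.0)   # ≈ 0.693, again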

A fix for this would be to introduce more variability in quality at longer odds. This is definitely something I’ll look into for next year’s model. For this year though, I’m going to stick with things as they are, and console myself that most of the countries with truly long odds have already been eliminated.
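
For what it’s worth, the fix could be as simple as letting the noise grow with the odds, reusing the quality function from the sketch above. The functional form here is entirely made up:

    # widen the quality uncertainty as odds lengthen, so songs at 100.0 and
    # 1000.0 become hard to tell apart while 2.0 and 20.0 stay worlds apart
    quality_sd(odds) = 0.2 + 0.1 * log(odds)                          # made-up form
    noisy_quality(odds) = quality(odds) + quality_sd(odds) * randn()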

While looking into this mistake, I also noticed a rather strange bug in the model, which resulted in some songs getting huge scores. One of the simulations showed San Marino getting a perfect 12 from every country. I’ve fixed this bug[1], which affected about 5% of simulations, but it does mean that the final predictions are made using a slightly different model from the earlier predictions, so there may be some slight changes.
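
The fix itself is mundane; as the footnote explains, it amounts to actually discarding the burn-in draws before summarising. A sketch, with random numbers standing in for the sampler output:

    # `samples` stands in for the Gibbs sampler's raw draws (iterations × parameters)
    samples = randn(11_000, 5)
    burnin  = 1_000
    kept    = samples[burnin+1:end, :]   # keep only the post-burn-in draws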

For the win

Moving on to the final then, the field is looking fairly balanced. After a Yugoslav-free final last year, and with the departure of Serbia and Croatia this year, it seemed that we were entering a new era for Eurovision. However, Slovenia and Montenegro have both qualified, for the first time in Montenegro’s case. I’ve seen it suggested that the former Yugoslav diaspora may be concentrating its votes on these two. If so, it bodes well for them.

Also interesting is that we have a full complement of Scandinavians present. With most voting blocs this would serve to weaken the strongest member, but the Scandinavians are usually fairly good at avoiding this. Expect plenty of vote-swapping, but the 12s should go to Sweden.

[Figure: Winning probabilities]

Not a lot has changed in the model prediction since the second semi-final, mostly because we already knew that nobody from the second semi was going to win. The main change is that, because I fixed the bug that gave them superhuman powers of singing, San Marino et al don’t really win very much any more.

From the Betfair side, the implied probability of a Dutch win has spiked enormously, to the point where they’re now second favourite. One of the main reasons for this is that the running-order draw has now been made. In general, over the past ten years, songs in the second half of the draw have performed much better than songs in the first. This year, Ukraine have drawn first position, and many of the other highly rated songs are in the first half. The Netherlands perform 24th, two slots from the end, while the UK take the final slot.

I had intended to incorporate running order into the model for this year, but it slipped my mind. This means that the model is missing out on some information. To try to combat this, I’ve re-run the model, but instead of using Betfair data from before the contest began, I’ve used the latest data.

[Figure: Winning probabilities redux]

As you can see, this causes the model to concentrate heavily on Sweden. Sweden have a fairly middling slot in the running order: they perform 13th, at the end of the first half. However, this is probably much better than the slots of their main competitors, Armenia and Ukraine (1st and 7th respectively). The Netherlands and the UK do receive a boost in this new set of simulations, but not enough to make them really competitive.

Bellwethers, likely twelves, etc

Usually I try to identify “bellwether” countries: countries whose 12 points are most likely to go to the winner. In this case, because the model is so certain about Sweden winning, this has largely boiled down to the other Scandinavian countries. So, while Denmark, Norway and Finland all have around 40% chance of picking the winner, this is largely because they’re probably going to pick Sweden, and Sweden’s probably going to win.

A few of the lower-ranked bellwethers may be more reliable in practice. Poland, Spain and Israel are only slightly less likely to pick the winner, and are probably more varied in their tastes. Toss in Hungary and Macedonia, and you’ve got yourself some predictions. As always, be aware that the voting order is chosen for dramatic purposes, once the televotes are known. It can be a fun game to try to think yourself into the mindset of a Eurovision vote order planner, assuming you’re sober enough by that point in the evening.

We don’t have Cyprus voting this year, so Eurovision’s favourite pairing (Gryprus? Cypreece?) won’t be in evidence. The model is also fairly cool on the Moldova/Romania relationship, although I’m a bit skeptical about that. There are, however, some vote patterns that we can rely on.

  • Belarus → Ukraine (48%)
  • Greece → Armenia (56%)
  • Russia → Armenia (61%)
  • Georgia → Armenia (63%)
  • Finland → Sweden (64%)
  • Netherlands → Armenia (67%)
  • Azerbaijan → Ukraine (71%)
  • Norway → Sweden (75%)
  • France → Armenia (76%)
  • Denmark → Sweden (82%)

Summing up, Sweden are the likely winners—I hope they kept all the decorations from last year. Armenia has a posse, but probably not enough to give it victory. Don’t buy the UK/Netherlands hype, unless you enjoy that sort of thing, in which case go have fun, what are you listening to me for?


P.S. I’ve made some extra predictions, which you should read if you want to know what position you can expect each country to finish in.


  1. If you’re interested, the problem was that the burn-in period for the Gibbs sampler wasn’t being discarded correctly, and so some parameters occasionally had rather stratospheric values.


Eurovision 2014: Second semi final predictions

This post is part of a series describing a predictive model for the Eurovision Song Contest. The full set of posts can be found here.


San Marino, I take it all back

Well, that was a bit surprising. Both San Marino and Montenegro qualified for the final for the first time in their histories. Either one on its own would be interesting, but both is downright bizarre. This particular set of qualifiers came up in only 18 out of 10,000 model simulations. For comparison, the most likely set of ten, which I predicted last time, came up in 628 simulations.

It’s quite likely that the model is poorly calibrated, and that it should have rated these two’s chances slightly higher. On the other hand, I think they were both always unlikely qualifiers, and I’d be very suspicious of a model which put them through with any confidence. The one upside is that now that these two have qualified for the final, the model should have a much richer set of voting data about them for next year.

So we move on, and I apologise to any Sammarinese whose hopes I may have dashed prematurely[1].

Where do we stand now?

If we incorporate the knowledge of which countries qualified for the final, we can update our winning predictions. Although the odds on Betfair have also changed, I haven’t updated the model to match. This is partly for convenience, and partly because I don’t want to introduce extra biases, given that the Betfair odds now presumably incorporate a lot of data from the first semi-final, some of which is already accounted for in the model. It’s also theoretically possible that someone has read my last blog post and made bets on the basis of the predictions therein. I really hope that nobody’s making any significant financial decisions on the basis of my half-cocked calculations.

Interestingly, one of the big changes in the Betfair odds is that Sweden and Armenia have traded places at the top of the board, so the implied probabilities from the betting data look a lot more like the model outputs now. It’s unclear what’s caused that change, as we really don’t have much more information about those two than we did 24 hours ago. Netherlands and Hungary have also had their odds tighten, which is probably simply a reflection of their having qualified, which was at least somewhat uncertain.

The strangest thing, as far as I can see, is that the odds for the UK have also tightened substantially. While we do have some new information about the countries that performed in the first semi-final, pretty much nothing has happened which could have an impact on the UK’s chances. I think this probably supports my theory that many of those backing the UK are not operating entirely on cold financial logic.

[Figure: Winning probabilities]

What then is new in the model predictions? The biggest change is that our surprise qualifiers, San Marino and Montenegro (and to a lesser extent Iceland), are now showing real possibilities of winning. The probabilities are still small (about 1-2%), but that’s a lot higher than before. To put this in perspective, they’re both now ranked ahead of perennial favourites Russia.

The other noticeable effect is that, due to Iceland qualifying, the chances of all the Nordic countries have been depressed slightly, as they’re likely to siphon a few votes away. This is a pretty small effect though, and Sweden has taken most of the hit. In general, everything looks quite a lot like we left it before the semi-final.

But we’re not there yet

All that is getting a bit ahead of ourselves though. First, we’ve got the second semi-final to look forward to. This is a much weaker field: the model gives a less than 8% chance that one of these countries goes on to win. That also makes the qualifications a bit less predictable, at least in theory. On the other hand, this is a small semi-final, with only 15 entries and 10 qualification spots, so we can probably expect some entries to go through which might not otherwise.

[Figure: Qualification probabilities]

Greece, Norway and Romania are usually safe bets for qualification, and the model is giving them all over 95% likelihood. We can almost certainly add Israel, Finland and Austria to that list. After that, things start to drop off a bit in certainty.

It’s also fairly easy to eliminate a few unlikely qualifiers. Macedonia and Slovenia were never among the powerhouses of the former Yugoslavia, and without their neighbours for support, they’re unlikely to go any further. Belarus is similarly dependent on absent friends, and has a tendency to produce the kind of slightly odd pop song that only a totalitarian regime could love.

That leaves six countries on the bubble: Georgia, Ireland, Lithuania, Malta, Poland and Switzerland. Both Ireland and Malta benefit from the UK voting in this semi-final, and the model expects them both to qualify. Poland and Lithuania should both get some support from Ireland, but Poland also benefit from having Germany voting, so we’re likely to be seeing their uniquely creepy blend of nationalism, agriculture and heaving bosoms in the final as well.

Georgia, who are usually a safe bet for qualification, have drawn a particularly bad semi-final, with almost no former Soviet republics for backup. According to Betfair, they’re the least likely country to qualify (price of 4.9, meaning a probability of about 20%). However, the model still reckons they’re the most likely to take the last slot, with a probability of 56%. Betfair thinks this slot will go to Switzerland, who have qualified only once in the past eight years, with Anna Rossinelli in 2011. She came last.

  1. Looking at my web stats, I can’t actually find evidence that anyone from San Marino has ever read this blog.


Eurovision 2014: First predictions

This post is part of a series describing a predictive model for the Eurovision Song Contest. The full set of posts can be found here.


For the last two years, I’ve been publishing the output of a statistical model for predicting the results of the Eurovision Song Contest. This year’s final takes place on Saturday in an abandoned shipyard in Copenhagen, so it’s time for some more predictions. I’ve made some small changes to the model this year, which have had huge consequences for the results; I think the predictions should be a lot more accurate now.

What’s new in model-land?

The big change in this year’s model is that I’ve incorporated data from betting markets, specifically the Betfair Eurovision winner market. In previous years, the model has had no real information about song quality to go on, apart from what we know from previous contests. This year, I took a look at the relationship between a song’s betting odds and the quality score estimated by the model.

[Figure: Score vs betting price]

Using data from 2004-2010[1], I’ve plotted the betting odds available in the week before the contest against the song’s overall quality score, as estimated by the model after the fact. There’s a pretty clear relationship: better songs get shorter odds, as you might expect. It’s not perfect, but it’s definitely better than what we had before.
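
The fit itself is nothing fancy: ordinary least squares of quality score on log price. A sketch with invented numbers standing in for that data:

    # `odds` and `quality` stand in for the real 2004-2010 data
    odds    = [1.5, 3.0, 11.0, 34.0, 120.0]   # illustrative decimal prices
    quality = [2.1, 1.4, 0.3, -0.6, -1.8]     # illustrative model quality scores

    X = [ones(length(odds)) log.(odds)]       # design matrix for quality ≈ a + b*log(odds)
    a, b = X \ quality                        # ordinary least squares fit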

Interestingly, there doesn’t seem to be as good a relationship between the betting odds and actual performance in the contest. It seems that gamblers are better at taking into account the quality of a song than the complicated voting patterns which exist. This is good for us, because it means that we can use the betting odds as a proxy for song quality without worrying about double-counting voting relationships.

I’ve also removed the effect of performer gender. After a bit of experimentation, it seems that this wasn’t helping much, and may even have been making things worse. It was also a bit of a pain to classify objectively, so I’ve dropped it. There’s not much effect on the final result.

On a technical level, I’ve reimplemented the model in Julia as a learning exercise. In general, I’m pretty impressed with Julia as a language. There are some mild annoyances with the type system, but I expect that’s more a result of my slightly dodgy beginner’s code than anything to do with the language itself. Performance is pretty fantastic, and all that’s really missing is the mature package ecosystem that more established languages have.

Enough with the nerding, what about the contest?

A few more countries have dropped out (Serbia, Croatia, Bulgaria, Cyprus), mostly citing economic worries. This leaves a bit of a hole in the Balkan region, which is historically one of the stronger voting blocs. It’s not immediately clear to me what effect that will have, but it’s probably good news for the other large blocs in Scandinavia and the former Soviet Union.

Returning, we have Portugal and Poland. Portugal will probably give 12 points to Spain, but this isn’t likely to affect the outcome of the contest by much. Poland are more of a wildcard, so it’s doubtful they’ll have a huge effect either. Overall, it’s likely that there’s been a slight rebalancing of the contest from east to west.

From a geopolitical perspective, it’s obvious to ask what the effect will be of recent events in Ukraine. The EBU have ruled that televotes from the Crimea will be treated as Ukrainian votes, as Ukrainian telecoms operators are still active in the region. As Ukraine usually gives a fairly high score to Russia anyway, it’s doubtful that this will skew things greatly.

Generally speaking, it’s quite rare for international events to have a big effect on voting in the contest, but it’s conceivable that there could be a small sympathy boost for Ukraine. Given that Ukraine is in semi-serious contention anyway, a small increase in votes could be all they need. It’s unlikely that there will be much negative backlash against Russia, for the simple reason that it’s impossible to cast votes against a country in Eurovision.

Here are the results of the Bayesian jury

Anyway, what are the predictions? The betting public seem to have chosen Armenia’s Aram MP3 as their favourite, but the model likes Sweden’s Sanna Nielsen a little bit more. As I said before, Ukraine are in with an outside shot, and the probabilities drop off very quickly after that.

[Figure: Winning probabilities]

Compared to previous years, the model is showing very high degrees of certainty, but this is largely due to having incorporated the Betfair data. In reality, this is a year with no stand-out entries, so it’s probably more open than usual to a strong performance on the night.

If we compare the model probabilities with the implied probabilities from just the Betfair data, there are some interesting patterns. Betfair gives a much higher win probability for the UK than the model does, which might be explained by Betfair’s primarily UK-based customer base. Similarly, the chances of my personal favourite, Austria’s Conchita Wurst, might be overestimated by some in the west.

Interestingly, the three countries the model projects as probable winners are all competing in the first semi-final on Tuesday night, along with Azerbaijan, Russia and Hungary, all of which are also highly rated. The only entrants in the second semi-final with more than 1% chance of winning are Norway and Greece, both of which clock in around 3%. The draw for the semi-finals is largely designed to prevent regional bloc voting, and doesn’t do much to prevent unbalanced draws like this one.

[Figure: Qualification probabilities]

Such a strong field makes the qualifications from the first semi-final a little bit predictable. The six countries I’ve mentioned so far all have more than a 95% chance of qualifying. Of the others, the Netherlands and Belgium should be back in the final, repeating last year’s success after a long absence. Moldova and Estonia are likely the last two qualifiers, but Iceland have an outside shot. San Marino, having sent the same performer every year they’ve entered, are very likely to get the same result: immediate elimination. Sorry, San Marino.

  1. This is the only data I could get hold of. If anyone reading has more recent data, please get in touch.


Eurovision 2013: Final predictions

This post is part of a series describing a predictive model for the Eurovision Song Contest. The full set of posts can be found here.


After Tuesday night’s disappointing result, I was somewhat worried about the changes to the model for this year, and considered reverting to last year’s model. However, this would be both intellectually dishonest and, more importantly, a lot of work, so I decided against it. In any event, the second semi-final threw up fewer surprises than the first, and the model performed fairly admirably, predicting 8 out of 10 qualifiers. This is better than random by quite a bit, but not an improvement on last year’s model. 14 out of 20 overall is respectable, but nothing to write home about.

Let’s get this out of the way

We now have all of the information we’re going to get before the final itself takes place on Saturday night. That means it’s time to make some forecasts.

As Macedonia failed to qualify, this will be the first Eurovision final since 1985 not to feature any (former) Yugoslavian entries. It will definitely be interesting to see what this does to the voting, as all of these countries’ points are up for grabs. The former USSR, on the other hand, will be there in strength, with only Latvia letting the side down. This is probably not great news for any of the ex-Soviet states, but Russia, Ukraine and Azerbaijan can probably weather the storm.

[Figure: Winning probabilities]

Now that they’ve qualified, Azerbaijan have retaken the top spot from Russia, and even extended their lead. This is probably because they’re better at drawing votes from outside the former Soviet Union, whereas Russia will now be competing somewhat with the other eight former Soviet republics. Scandinavia is also very well represented in the final, which may be a blow to the chances of everybody’s favourite, Denmark (now at an implied win probability of 55% on Betfair).

Overall, the chance of an ex-Soviet winner is a very respectable 47%: with nine entries, they must have a decent song in there somewhere.

Are you as good as you think you are?

Of course, all of these probabilities are based on a very vague idea of what each song’s quality level is. The model hasn’t heard any of the songs, so there’s some very important information missing in these calculations.

It’s possibly more interesting to ask, rather than “who will win?”, “how good does a song have to be to win?”. If we have an answer to that, we can apply our own judgment to the songs which we hear tomorrow night. I’ve plotted the quality level a song has to reach before its country has a 50-50 shot at victory. As the model’s quality units are a little abstract, I’ve also included five recent winners (and one not-winner[1]) for comparison.
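
Mechanically, finding that threshold is a bisection over quality, repeatedly asking the model for a win probability. A sketch only: simulate_win_prob is a hypothetical wrapper around the model, and the quality bounds are arbitrary:

    # find the quality at which `country` has a 50-50 shot at victory
    function threshold_quality(country; lo=-5.0, hi=5.0)   # arbitrary search bounds
        for _ in 1:20                                      # simple bisection
            mid = (lo + hi) / 2
            # `simulate_win_prob` is hypothetical: win probability at quality `mid`
            simulate_win_prob(country, mid) < 0.5 ? (lo = mid) : (hi = mid)
        end
        (lo + hi) / 2
    end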

[Figure: Threshold qualities, with the reference songs marked for comparison]

By this measure, Russia have the easiest run of things, but they’ll still need a better song than they’ve ever produced to reach this level. In fact, only five countries have produced songs which, if entered this year, would give them a better than 50% chance of winning. From the graph, we can see that Norway, Greece and Finland have done so - if they can replicate those performances, they’ll have an excellent chance of victory. Azerbaijan have also managed it, but interestingly not with their winning song - the model claims that their 2009 entry, “Always”, was considerably better.

The other country is maybe more interesting. The United Kingdom have produced two entries which were good enough to win, but failed to do so for various reasons. In 1998, Imaani came a very close second with “Where Are You?”, losing out by only six points. In 2009, Jade Ewen sang the Andrew Lloyd Webber/Diane Warren number “It’s My Time”, but lost out to the Alexander Rybak juggernaut. In a weaker year like this one, either of these songs would easily be in with a good chance of winning.

The UK are one of the most variable countries, and this isn’t something the model takes into account. In a good year, they can severely overperform the model predictions. In a bad year, they can be among the worst countries out there. It’s up to you whether you think Bonnie Tyler is at the high or low end of that spectrum.

Early indications

Some countries vote very predictably, and other countries less so. Like last year, the voting order this year will be rigged for maximum excitement, so it’s likely that the more predictable countries will be got out of the way early in the voting, to increase the suspense. However, we can still look at which countries are likely to be good predictors of the final winner.

In this case, we’re looking at the “bellwether probability”, the chance that the entry that each country gives 12 points to goes on to win the contest. The more predictable countries tend to be very low on this score. Cyprus gives its 12 to Greece almost all the time, so like a stopped clock it’s only “right” when Greece wins. On the other hand, Hungary has no particular alignments, so its votes are more likely to match with those of Europe as a whole.
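
Estimating that from the simulations is simple counting. A sketch, assuming (hypothetically) that each simulation record stores the winner and every country’s 12-point pick:

    # fraction of simulations in which `country`'s 12 points went to the winner;
    # `sims` is hypothetical: records with fields `winner` and `twelve_points`
    bellwether(country, sims) =
        count(s -> s.twelve_points[country] == s.winner, sims) / length(sims)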

[Figure: Bellwether probabilities]

Last year, the best predictors were a diverse group of central European countries and outliers. This year we’ve added a new and intriguing group of bellwethers. As there are no former Yugoslavian entries in the final (nor their neighbour Albania), this normally completely predictable area has sprung wide open. If an entry can appeal to this area of the map, there are a lot of points available. If only everyone had known that beforehand.

Old friends

At the other end of the scale, there are the perennial relationships that lead people to claim that the voting is “rigged”. I’m reliably informed that people last year used these as the basis of a drinking game. I couldn’t possibly condone such behaviour, but I feel I should list them for completeness.

Actually, in the absence of the Balkans and Turkey, many of the longstanding relationships are left dangling. This could be one of the most unpredictable sets of voting in recent memory. However, some relationships remain strong:

  • Lithuania → Georgia (42%)
  • Ukraine → Azerbaijan (43%)
  • Albania → Greece (45%)
  • Belarus → Russia (48%)
  • Italy → Romania (49%)
  • France → Armenia (51%)
  • Armenia → Russia (56%)
  • Moldova → Romania (59%)
  • Romania → Moldova (74%)
  • Cyprus → Greece (90%)

As I said, these are less certain than last year, so adjust beverage sizes accordingly.

I don’t have time to read all that nerd stuff

To summarise, if this is a typical year, then Azerbaijan have the best shot at things. Russia have the easiest ride of things, but don’t have quite as consistent a record as the Azeris. The UK could probably win this thing if they bother to try this year, and avoid a Humperdinck-style disaster.

Things are a little unpredictable this year, because the qualifiers are a little bit unbalanced. You can still rely on Cyprus loving Greece to prove you haven’t slipped into an alternate timeline. He who controls the Balkans controls the universe.

For listeners in the UK, I’ll be doing an interview with BBC Radio Wales on Saturday night, around 7:50pm, as part of their Eurovision coverage, live from my (and Bonnie Tyler’s) local pub.

  1. This is, according to the model, the best song never to have won Eurovision. It actually came third in 2004, behind Ukraine and Serbia/Montenegro, both of which benefitted greatly from their regional voting blocs.