Solving one of the elusive, complex Millennium Prize Problems is worth $1 million.
By Alyssa Oursler and Karl Roth
On May 24, 2000, famed mathematicians Sir Michael Atiyah and John Tate entered a lecture hall at the Collège de France in Paris. There, they made an announcement that put sophisticated mathematics into an unusually mainstream spotlight. The first person or team to crack any of the seven most challenging unsolved math problems would be awarded a cool $1 million prize. The Millennium Prize Problems, as they’re called, created a collective $7 million in prize money—money meant to incentivize the greatest minds to find answers to some of math’s longest-standing mysteries.
A century prior, Paris was home to a very similar occasion. At the second International Congress of Mathematicians, German mathematician David Hilbert presented 23 of the most important, unsolved mathematical problems of the time—problems meant to propel the world forward into the 20th century. These 23 problems became known as the Hilbert Problems. To solve one of the Hilbert Problems would ensure immediate renown within the field of mathematics—an incentive that proved sufficient. By the time the Millennium Problems were born 100 years later, all but one of Hilbert’s problems had been solved.
As a headline in Scientific American put it, “Hilbert Walked so the Clay Mathematics Institute Could Run.” Landon Clay was a successful entrepreneur from Cambridge, Mass. and, more importantly, the source of the prize money for the Millennium Problems. It would be easy to assume that Clay, too, was a mathematician. In reality, he was an English major from Harvard who made his fortune in mutual funds and the business sector. This didn’t stop Clay from supporting the field of mathematics, though. A year prior to the announcement of the Millennium Problems, Clay started a nonprofit dedicated to research in the field. It was called, creatively, the Clay Mathematics Institute (CMI). CMI chose the Millennium Prize Problems and oversees the competition.
With the announcement of the Millennium Prize Problems, Clay succeeded in both paying homage to Hilbert and reintroducing some vigor into the mathematical community, as the Millennium Prize Problems provided the field with some long-awaited attention. Each of the seven problems has plagued the minds of prominent mathematicians for years—and to solve even one would benefit not just the field of math, but adjacent fields like physics, chemistry, and computer science.
Put simply, the answers to the Millennium Problems would allow us to gain a deeper understanding of the world we’ve discovered and built around us, pointing us toward underexplored territory. With that in mind, let’s take a look at each problem. As a note, some of this will get into complex mathematics, though we try to stay on the surface and keep it as simple as possible. If at any point you find yourself getting lost, just remember: these problems have stumped some of the greatest mathematical minds too!
The Riemann Hypothesis
Of the 23 problems presented in Paris over 100 years ago, only one remains unsolved. This lingering Hilbert problem is called the Riemann Hypothesis and has, quite logically, been converted into a Millennium Problem. As the last problem standing, it’s easy to see why CMI felt the need to keep it in the global conversation.
The problem itself was first proposed in 1859 by German mathematician Bernhard Riemann. Riemann was essentially attempting to answer a longstanding question regarding the pattern of prime numbers and their distribution amongst all counting numbers. It was with his own understanding and use of the zeta function, later to be known as the Riemann Zeta Function, that Riemann was able to construct his hypothesis.
In layman’s terms, Riemann discovered that his zeta function yields a value of zero for certain inputs, now known as zeta zeroes. Some zeta zeroes are considered trivial; in fact, every negative even integer (–2, –4, –6, –8…) yields zero when fed into the zeta function. But it’s the non-trivial zeros that have inspired much curiosity over the years.
Non-trivial zeros follow a striking pattern, and that pattern acts as the central motif of the Riemann Hypothesis. All of these non-trivial zeros lie within a region called the critical strip—the set of inputs whose real part lies strictly between 0 and 1. Riemann proved there are an infinite number of zeroes to be discovered in this strip. His hypothesis states that all of these zeroes sit on the line running down the middle of the critical strip—where the real part is exactly ½.
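To make the critical strip a little more concrete, here is a small numerical sketch (our illustration, not anything from Riemann): inside the strip, the zeta function can be computed from the alternating Dirichlet eta series, and evaluating it at the first known non-trivial zero, roughly ½ + 14.134725i, produces a value numerically indistinguishable from zero—while nearby points on the critical line do not.

```python
def zeta(s, terms=50000):
    """Approximate zeta(s) inside the critical strip (0 < Re(s) < 1)
    via the alternating Dirichlet eta series:
        eta(s) = sum_{n>=1} (-1)^(n-1) / n^s,
        zeta(s) = eta(s) / (1 - 2^(1-s)).
    """
    partial = 0.0
    prev = 0.0
    for n in range(1, terms + 1):
        prev = partial
        partial += (-1) ** (n - 1) / n ** s
    # Adjacent partial sums of an alternating series bracket the limit,
    # so averaging the last two sharpens the estimate considerably.
    eta = (prev + partial) / 2
    return eta / (1 - 2 ** (1 - s))

# The first non-trivial zero is known to sit at about s = 1/2 + 14.134725i,
# right on Riemann's critical line; a nearby point on the line is not a zero.
print(abs(zeta(0.5 + 14.134725j)))  # vanishingly small
print(abs(zeta(0.5 + 10j)))         # clearly nonzero
```

This is only a numerical check at one point, of course—the hypothesis is a claim about all infinitely many non-trivial zeros, which is exactly why no amount of computation can settle it.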
Why does proving or disproving this hypothesis warrant a $1 million reward? And where does this theory intersect with prime number theory? Riemann discovered that each non-trivial zero contributes a wave-like term on the complex plane, and that these zeros were exactly what he needed to connect his zeta function to the distribution of prime numbers along the infinite line of counting numbers. Summing the harmonics created by every zeta zero would, in principle, pinpoint the position of every prime number along that line.
To prove this hypothesis would provide us with all we need to know regarding the distribution of prime numbers. As Insider summed it up, “10,000,000,000,000 prime numbers have been checked and are consistent with the equation, but there is no proof that all primes follow the pattern.” By proving all primes follow the pattern, we could put a great many mathematical results—currently proved only on the assumption that the hypothesis holds—on solid ground.
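The kind of computation behind such checks can be sketched in miniature (our toy version, nowhere near 10 trillion primes): count the primes up to x with a sieve and compare the count with the estimate x / ln(x) from the Prime Number Theorem—the theorem whose error term the Riemann Hypothesis would pin down.

```python
import math

def prime_count(x):
    """Count primes <= x with a simple Sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (x + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if is_prime[p]:
            # Cross out every multiple of p starting at p*p.
            is_prime[p * p :: p] = bytes(len(range(p * p, x + 1, p)))
    return sum(is_prime)

for x in (1_000, 10_000, 100_000):
    print(x, prime_count(x), round(x / math.log(x)))
# e.g. there are 168 primes up to 1,000 and 9,592 up to 100,000;
# the x/ln(x) estimate tracks these counts ever more closely.
```

The hypothesis, roughly, says the gap between the true count and the refined analytic estimate stays as small as it possibly can, all the way to infinity.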
Many results in number theory—including some factoring and primality-testing methods—are built on the assumption that the Riemann Hypothesis is true. Proof of the hypothesis would thus not only bolster those methods but allow us to develop new ones with more confidence. Its impact would also extend past theoretical mathematics into fields that rely heavily on large amounts of numerical factoring, like cybersecurity and encryption. Last but not least, the Riemann Hypothesis could even help unify mathematics and quantum physics: the statistics of the zeta zeros appear to mirror the energy levels of certain quantum systems.
Sir Michael Atiyah, the mathematician who helped introduce the Millennium Problems to the public, claimed in 2018 to have proven the Riemann Hypothesis. His proposed proof, however, was met with widespread skepticism and was never accepted by the mathematical community. Atiyah died in early 2019, and the Riemann Hypothesis—along with its $1 million prize—remains up for grabs.
Yang-Mills Theory and the Mass Gap Hypothesis
There has long been a connection between the world of mathematics and the world of physics. In simplest terms, both fields exist to help us comprehend the world around us. But while progress in both fields has been made, physics has produced breakthroughs with which mathematics has yet to catch up—most of them circling around quantum physics.
The holy grail of physics has long been hailed as the discovery of a grand unified theory. This theory would act as the link between the known forces of the universe: gravity, electromagnetism, strong nuclear force, and weak nuclear force. Currently, depending on the scope the physicist is using, physics is understood through either relativity theory (for those, like astrophysicists, looking out to the universe on a large scale) or quantum theory (for those, like particle physicists, looking at the smallest scales). All attempts to unify the two theories have so far led nowhere: when the equations are combined naively, they yield infinite answers, rendering them hollow and meaningless.
Some 80 years ago, though, a shift in physics took place—and quantum physics was born.
Physicists began discovering many different types of particles beyond the three they were already aware of: protons, neutrons, and electrons. It was accepted that these three types of particles made up matter, while photons were massless particles understood to be the carriers of light itself.
With the discovery of new particles came a lot of confusion, as many defied what was understood to be true of the behavior of existing particles. Enter the Yang-Mills Theory: an attempt to organize the emergence of this new world. The Yang-Mills Theory was first proposed in the 1950s and represents a base for the construction of the standard model: the new method of organization for newly discovered particles using structures that also occur in geometry.
Since its inception, the Yang-Mills Theory has become the foundation for large amounts of elementary particle theory. As such, it represents an initial step towards a grand unified theory. This is because, in layman’s terms, Yang-Mills allows physicists to derive equations for both classical and quantum settings. Under the Yang-Mills framework, another problem arises: the Mass Gap Hypothesis.
The problem itself lies simply in the idea that nobody has been able to mathematically solve any of the equations that the Yang-Mills Theory proposes. As Keith Devlin wrote in his book The Millennium Problems, “The most accurate scientific theory the world has ever seen is built upon equations that no one can solve.” Thus, the contest “challenges the mathematical community to address this issue,” Devlin writes, “first by finding a solution to the Yang-Mills equations, and second by establishing a technical property of the solution called the Mass Gap Hypothesis.”
While physicists continue to use the equations to gather numbers that are, in all actuality, alarmingly accurate, there is still an air of approximation to the whole enterprise. As Devlin summarized, “this second part of the problem will ensure that the mathematics remains consistent with computer simulations and observations made by physicists in the laboratory.”
As stated earlier, to solve this Millennium Problem would have a huge impact on the fields of physics and mathematics. In physics, solving the Yang-Mills Theory and Mass Gap Hypothesis would be a major breakthrough for theoretical physics, placing its theories on a foundation that can withstand mathematical scrutiny.
Additionally, as Edward Witten wrote in “Physical Law and the Quest for Mathematical Understanding,” it would “shed light on a fundamental aspect of nature that physicists still do not properly understand.” In mathematics, it would essentially adopt quantum field theory as a brand-new mathematical theory, as opposed to one that exists squarely in the realm of physics.
The P vs. NP Problem
Once again, the Millennium Prize Problems are rooted in math but overlap tremendously with other fields. For this next question, we leave the world of physics to arrive in the world of computational mathematics and computer science. The third Millennium Prize Problem—the P vs. NP Problem—speaks to our ability to compute complex equations and the limitations set forth by computers.
In 1971, Stephen Cook published a seminal paper called “The Complexity of Theorem Proving Procedures.” After studying complexity theory—which “analyzes computational processes to see how efficiently they can be carried out”—Cook coined the term NP completeness. NP stands for nondeterministic polynomial time. A problem is an NP problem, according to Devlin, “if it can be solved or completed in polynomial time by a nondeterministic computer that is able to make a random choice between a range of alternatives and moreover does so with perfect luck.”
This random choice made with “perfect luck” streamlines computation by making adapted yet “random” computational choices—essentially in the interest of saving computational time. As more people studied Cook’s theory, they began to realize that many of the most important established NP problems were, in fact, NP complete. If an NP problem is NP complete, it sits among the hardest problems in NP: a polynomial-time procedure for it would yield one for every problem in NP, which is exactly why such a procedure is expected to be incredibly difficult to find. Thus, a large swath of computational processes remained bogged down, processing data without a method of discerning where to place their computational efforts.
This is why P vs. NP stands as a considerable and worthy member of the Millennium Prize Problems. At its heart, the problem asks whether every problem whose solution can be checked quickly (NP) can also be solved quickly (P). To settle it, someone needs to prove that P equals NP—or that it doesn’t.
Currently, encrypting and decrypting data with a known key is a P problem—it can be done quickly. Breaking an encryption without the key, however, is an NP problem: a candidate key is easy to check, but finding one appears to require years of computation and enormous processing power (not to mention that no polynomial-time procedure for the search is known).
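The “easy to check, seemingly hard to find” asymmetry at the heart of P vs. NP can be illustrated with Subset Sum, a classic NP-complete problem (the numbers and target below are our own toy example): verifying a proposed answer takes a quick pass over the data, but the only obvious way to find one tries exponentially many candidate subsets.

```python
from itertools import combinations

def verify(numbers, candidate, target):
    """Polynomial-time check: is this candidate a valid certificate?"""
    pool = list(numbers)
    for value in candidate:
        if value in pool:
            pool.remove(value)   # each value may be used only once
        else:
            return False
    return sum(candidate) == target

def find(numbers, target):
    """Brute-force search: up to 2^n subsets in the worst case."""
    for size in range(len(numbers) + 1):
        for combo in combinations(numbers, size):
            if sum(combo) == target:
                return combo
    return None

nums = [267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922]
target = 5617
solution = find(nums, target)          # slow part: exhaustive search
print(solution, verify(nums, solution, target))  # fast part: checking
```

With ten numbers the search is instant, but each added number doubles the worst-case work—which is exactly the gap the P vs. NP question asks about: is that doubling truly unavoidable, or are we just missing a cleverer algorithm?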
If we were to discover that P and NP were the same, it would render our encryption methods completely compromised. Theoretically, they would be open to attacks as long as the attacker had enough time and computational power to reach a code-breaking result. Meanwhile, finding out that P equals NP would also collapse the distinction at the heart of Cook’s concept of NP completeness.
On the other hand, to prove that P and NP are in fact different comes with its own heavy task. In order to do so, one has “to show that there can be no procedure that solves the problem in polynomial time.” That means procedures both now and in the future. One has to not only prove their differences with the known procedures available, but with any theoretical procedures not yet discovered. In this regard, nobody has even come close.
How Math Birthed Computer Science
Prior to computers coming into existence, mathematicians were already considering the theory behind what it takes to solve any specific set of equations. In 1930, Kurt Gödel discovered that “in any part of mathematics that includes elementary arithmetic (which means practically any remotely useful part of mathematics), no matter how many axioms you write down, there will always be some true statements that cannot be proved from those axioms.”
An axiom is a mathematical statement accepted as true without proof. Axioms thus represent the theoretical foundation on which further equations are conducted and results derived—on the assumption that each axiom is, in fact, true. Gödel’s discovery, soon dubbed the Gödel Incompleteness Theorem, highlighted the fact that, within any such axiomatized system, there will be true statements that cannot be proved—and, relatedly, functions that cannot be computed. This insight was the initial conceptualization of what we now know as a computable function.
Gödel inspired other mathematicians to begin studying which functions in mathematics could be computed, and which could not. With computability now a worthy field of study, computers themselves would soon come to be. But while this field created a need for computers, and in many ways drove them into existence, computers—mere tools for processing data and large computations—were of little concern to the many mathematicians whose work had migrated into more theoretical practice.
While many mathematicians quickly lost interest, others became interested in the capabilities of computers and their use in the field of mathematics. It was from their interest and thinking that fields like approximation theory, dynamical systems theory, and numerical analysis came into being. These studies evolved into what we now call computer science and act as building blocks for current computer-based developments like artificial intelligence.
The Navier-Stokes Equations
It’s easy to overlook the mundane. Something like the flow of water seems so natural to us that we often forget the complexity lurking (pardon the pun) under the surface. Claude Louis Marie Henri Navier’s interest in such natural complexities is what led us to the next Millennium Problem: the Navier-Stokes Equations.
Navier, a French scientist and well-respected engineer, made his way into the public eye for his designs and work as a bridge builder. However, Navier had the passion and training of a mathematician. During his time teaching at the École des Ponts et Chaussées in the 1800s, Navier began considering the mathematics of flowing fluids.
Newton first introduced the subject in 1687 with his release of the Philosophiae Naturalis Principia Mathematica, in which he set forth his laws of motion and changed the scientific world forever. His Second Law of Motion represents the foundation for the nature of moving fluids (fluids constituting both liquids and gases).
Mathematicians would soon start diving deeper into the subject. First, Daniel Bernoulli’s adaptation of calculus helped discern the way fluids move and act when they are introduced to a multitude of forces. Leonhard Euler furthered this concept by constructing a set of equations that attempted to describe the motion of a hypothetical viscosity-free fluid. Next, in 1822, Navier entered the conversation by reworking Euler’s equations to account for some measurable amount of viscosity.
As momentum built, it took only a few years for mathematical prodigy George Gabriel Stokes to discover that Navier’s mathematical reasoning was unfortunately imprecise—and yet the equations Navier had stumbled upon were correct. Stokes had been fascinated with the flow of fluids from the beginning of his career, and he used his advanced knowledge of calculus to rederive the equations Navier had found some 20 years prior—this time with the correct reasoning. This bound the two together in history and established the set of equations known as the Navier-Stokes Equations.
Stokes found great success in his career, and the study of the flow of fluids enjoyed the same forward progression. It very much seemed as though mathematicians were close to establishing a complete theory of fluid flow. But they eventually hit a standstill. In order to consider the flow of fluids acting in a continuous state of motion, mathematicians needed to handle two sets of infinitesimals: a sequence of “still frames” (where motion is considered as a sequence of static situations) and an infinitesimal geometric variation between two picked points in a sequence (and how they follow each other on a path).
The problem, as Devlin summarized, is that “when we try to capture the motion of the fluid at any point in terms of its motion in each of the x-,y-, and z- directions, we make life unnecessarily complicated for ourselves.” In simpler terms, the formula that actually solves the Navier-Stokes Equations has remained elusive—and no one has been able to show whether such a solution exists mathematically at all. It’s obvious to the eye that nature works out such equations. But is there an obtainable understanding of such a phenomenon in the world of mathematics? Answer this question, and you’ll be a millionaire.
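For reference, the equations at the center of the problem—the incompressible Navier-Stokes equations, here in modern vector notation—are compact enough to fit on two lines (u is the fluid’s velocity field, p its pressure, ρ its density, and ν its viscosity):

```latex
% Momentum balance (Newton's second law applied to a parcel of fluid):
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u}
% Conservation of mass for an incompressible fluid:
\qquad \nabla \cdot \mathbf{u} = 0
```

The Millennium Problem asks, roughly, whether smooth solutions to these equations always exist in three dimensions for all time, given smooth initial conditions—or whether the flow can blow up.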
Varying solutions to simplified versions of the Navier-Stokes Equations have been proposed, but none yet accounts for phenomena in three dimensions without an incredible number of restrictions. As such, these proposed solutions have been deemed incomplete.
To solve these equations would give us an incredibly deep understanding of natural forces we interact with daily. For instance, being able to calculate the flow of air in a precise mathematical way would revolutionize air travel. The solution would extend into the realm of digital media as well; the equations could be used to improve the CGI rendering of fluids in digital landscapes found in video games and movies.
The Birch and Swinnerton-Dyer Conjecture
During the early 1960s, computers were still in their earliest form of development. In fact, only a few computers existed in the world at that time, including one at Cambridge University in England. Bryan Birch and Peter Swinnerton-Dyer, two faculty mathematicians, took advantage of their access to the Cambridge EDSAC, as the computer was called, to gather data about an array of polynomial equations. In doing so, a specific set of patterns caught their attention—patterns known as elliptic curves. With their data from studying such mathematical patterns, Birch and Swinnerton-Dyer drafted the conjecture that makes up the fifth Millennium Prize Problem.
This is difficult stuff, so bear with us here. The Birch and Swinnerton-Dyer Conjecture is a statement about elliptic curves. Elliptic curves are different from ellipses; historically, they emerged from the integrals used to compute the arc length of ellipses. On these curves, mathematicians have found rational points to hold significant value. Rational points are points on the curve where both coordinates are rational numbers. They are noteworthy because they are rare: the coordinates of most points on such a curve are irrational. It’s these rational points that allow mathematicians to better understand elliptic curves, and in turn, pursue deeper topics like Diophantine equations and the zeros of polynomials.
In 1922, Louis Mordell proved that every rational point on an elliptic curve can be generated from a finite subset of those points. This may seem obvious, as some elliptic curves only have a finite number of rational points. But elliptic curves with an infinite number of rational points exist as well. Mordell’s theorem proved to be quite important with regard to progress in the field. However, it’s not always useful in practice: when the rational points involved are inherently large, finding them is an incredibly difficult task. In order to alleviate the burdens of such a task, mathematicians created the L-function: “a convenient tool to associate different kinds of objects to each other, e.g., elliptic curves and modular forms,” as someone summarized succinctly on StackExchange.
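To make “rational points” tangible, here is a small illustrative search for points with integer coordinates on one concrete elliptic curve, y² = x³ + 17 (a classic example; the code itself is our own sketch). Brute force works here because this curve happens to have small points—on the curves number theorists and cryptographers care about, the interesting points are far too large to stumble upon this way.

```python
import math

def integer_points(b, x_range=100):
    """Integer solutions of y^2 = x^3 + b with |x| <= x_range."""
    points = []
    for x in range(-x_range, x_range + 1):
        rhs = x ** 3 + b
        if rhs < 0:
            continue                    # y^2 cannot be negative
        y = math.isqrt(rhs)
        if y * y == rhs:                # rhs is a perfect square
            points.append((x, y))
            if y:                       # (x, -y) also lies on the curve
                points.append((x, -y))
    return points

print(integer_points(17))
# Finds, among others, (-2, 3), (2, 5), (4, 9), (8, 23), and (43, 282).
```

Mordell’s theorem says a finite handful of such points generates all the rational points on the curve; the conjecture is about predicting the size of that handful.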
The Birch and Swinnerton-Dyer Conjecture asks about specific properties of this L-function with regard to the rank of an elliptic curve. Roughly speaking, the rank counts how many independent rational points—the generators guaranteed by Mordell’s theorem—are needed to spawn the curve’s infinite families of rational points; the conjecture ties this rank to the behavior of the L-function. Of course, this is an incredibly simplified version of not only what this conjecture is, but also of the mathematics needed to understand it. In some ways, this might even be offensive to actual mathematicians! If your interest is piqued and you’re still with us, definitely dig a bit deeper.
Proving this conjecture true would offer significant advances in a technology we’ve already mentioned: encryption. Currently, we largely use a method of data encryption called RSA encryption. With a deeper understanding of elliptic curves, elliptic-curve cryptography—already in growing use—could become the next standard of data encryption. That would be beneficial: key lengths are much shorter, and keys are faster to handle and use less memory and CPU energy.
The Hodge Conjecture
Thus far, we’ve gotten a glimpse of the degree to which mathematics extends past the basics many of us learned as children. Diving into the world of math means diving into a vast world that is connected to numerous other fields, yet extremely abstract. The next problem, the Hodge Conjecture, is perhaps the most abstract of them all. In fact, it has stumped the mathematical community in ways that would be too complicated to dig into here!
In order to gain even a rudimentary understanding of what the Hodge Conjecture states, we have to hark back to the birth of geometry. Mathematicians have long studied and collected data on the nature of geometric shapes, and many have furthered the field in exceptional ways. Whether it was Pythagoras around 500 BC or Pascal’s work in the mid-1600s, the cataloguing of shapes continued for centuries. But it was Descartes’ work in 1637 that set the framework for what would later develop into the Hodge Conjecture.
Descartes speculated (and eventually, with his Cartesian Coordinate System, proved) that there was an inherent link between a geometric line and the numbers found from a set of equations. This bridge between algebra and geometry inspired a mathematical renaissance of sorts. Mathematicians became bored with the notion of studying just lines and moved onto more complex concepts.
One such concept was the introduction of more complicated equations, which were used to explore shapes that could only be conceived in an algebraic world. Put simply, these shapes defy conventional geometry; imagining them outright is all but impossible. In the world of algebra, however, they are very real. Mathematicians came to call these shapes algebraic varieties, and certain well-behaved combinations of the geometric pieces inside them algebraic cycles.
One very specific yet immensely important kind of algebraic variety is a manifold. This description is used when a variety is smooth, with no singular points. Algebraists and topologists alike began playing with the concept of drawing shapes on top of manifolds, and asking whether those shapes could be mathematically redescribed as other shapes. This posed a new question: how do we describe any new, random shape drawn on a manifold as a sound, clean algebraic cycle?
Scottish mathematician William Hodge’s conjecture would attempt to answer this question. The conjecture itself came as a result of his work from 1930 to 1940. As CMI’s website summarizes, “The Hodge Conjecture asserts that for particularly nice types of spaces called projective algebraic varieties, the pieces called Hodge cycles are actually (rational linear) combinations of geometric pieces called algebraic cycles.”
This conjecture would stand as a fundamental guiding principle telling us what in this field should be considered true—and what is even worth proving. These algebraic cycles are incredibly intricate objects that exist in two spaces: period integrals and Galois representations. Proof of the Hodge Conjecture would connect these two spaces and allow us to share information from one mathematical space to the next using algebraic cycles. While the Hodge Conjecture’s implications in the real world don’t extend to great heights, its impact on advanced theoretical mathematics would be tremendous.
The Poincaré Conjecture
Mathematics is composed of theory, conjecture, and a profoundly complicated presupposition that certain theories, while perhaps unproven, are still in fact true. We layer those theories upon one another, too. As such, many of the Millennium Prize Problems (if not all of them) have spawned countless new mathematical fields, even while remaining unproven. The Poincaré Conjecture, though, is the one Millennium Prize Problem that has been solved.
On March 18, 2010—nearly a decade after the contest was announced in Paris—CMI formally declared that Dr. Grigoriy Perelman of St. Petersburg, Russia had resolved the conjecture. The Poincaré Conjecture was formulated by French mathematician Henri Poincaré in 1904. Poincaré’s interest in outer space eventually led him to topology, a new kind of math that some have called a form of “ultra geometry.” Topology is the study of shapes of all dimensions. Per Professor Pascal Lambrechts, the Poincaré Conjecture can be summarized by the following question: “What shape can a three-dimensional space have?”
More specifically, the conjecture considers a space that “locally looks like ordinary three-dimensional space, but is connected, finite in size, and lacks any boundaries.” Remember manifolds from the last section? This space is also known as a closed 3-manifold. The conjecture is key to achieving an understanding of three-dimensional shapes. As Professor Hyam Rubinstein summarized for The Conversation:
“A good way to visualise Poincaré’s conjecture is to examine the boundary of a ball (a two-dimensional sphere) and the boundary of a donut (called a torus). Any loop of string on a 2-sphere can be shrunk to a point while keeping it on the sphere, whereas if a loop goes around the hole in the donut, it cannot be shrunk without leaving the surface of the donut.”
Perelman’s solution to this conjecture built on a Ricci flow program developed by Professor Richard S. Hamilton. Essentially, Hamilton formulated a “dynamical process” for geometrically distorting a 3-manifold. It’s the geometric analogue to how heat spreads in a material. And yet, no one could prove the process would not be “impeded by developing singularities”—until Perelman. Perelman’s proof demonstrated a full understanding of singularity formation in Ricci flow, in addition to numerous other new elements.
Perelman didn’t dream of solving a Millennium Prize Problem for the money, though. He didn’t submit his proofs to CMI; instead, he simply uploaded them to the public preprint site arXiv.org in 2002, with a final version posted to the same site about four years later. That year, Science recognized it as the “Breakthrough of the Year.” In 2008, a detailed account of his solution to the Poincaré Conjecture was published in the journal Geometry & Topology. When CMI offered him $1 million for solving the first Millennium Prize Problem two years later, he declined. As the BBC put it, Perelman was “a virtual recluse.” He didn’t want fame, wealth, or attention.
While Perelman may have avoided the spotlight, such a monumental achievement will not be forgotten, and his name is not lost to history. “It is a major advance in the history of mathematics that will long be remembered,” James Carlson, President of CMI, said in 2010. “His ideas and methods have already found new applications in analysis and geometry; surely the future will bring many more.” A few months after offering him the $1 million, CMI and the Institut Henri Poincaré held a conference to celebrate the resolution of the conjecture. It was, of course, in Paris.
Alyssa Oursler is a PhD student and award-winning journalist. Karl Roth is a writer and musician based in Minneapolis.
 The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time. Keith Devlin. 2002.
 “Hilbert Walked so the Clay Mathematics Institute Could Run.” Scientific American. Evelyn Lamb. Oct. 17, 2019. (https://blogs.scientificamerican.com/roots-of-unity/hilbert-walked-so-the-clay-mathematics-institute-could-run/)
 Devlin, page 2.
 “Top mathematician says he solved the ‘single most important open problem’ in math after 160 years.” Insider. Bill Bostock. Sept. 24, 2018.
 Devlin, page 94.
 Devlin, page 94.
 “Physical Law and the Quest for Mathematical Understanding.” Bulletin of the American Mathematical Society. Edward Witten. 2002. (https://www.ams.org/journals/bull/2003-40-01/S0273-0979-02-00969-2/S0273-0979-02-00969-2.pdf)
 Devlin, page 111.
 Devlin, page 124.
 Devlin, page 128.
 Devlin, page 108.
 Devlin, page 131–132.
 “Millennium Prize: the Navier–Stokes existence and uniqueness problem.” The Conversation. Jim Denier. Nov. 16, 2011. (https://theconversation.com/millennium-prize-the-navier-stokes-existence-and-uniqueness-problem-4244)
 Devlin, page 132.
 Devlin, page 154.
“Millennium Prize: the Birch and Swinnerton-Dyer Conjecture.” The Conversation. Daniel Delbourgo. Nov. 30, 2011. (https://theconversation.com/millennium-prize-the-birch-and-swinnerton-dyer-conjecture-4242)
 “Why are L-functions a big deal?” Stack Exchange. (https://math.stackexchange.com/questions/1857980/why-are-l-functions-a-big-deal)
 See, for instance: “The Most Difficult Math Problem You’ve Never Heard Of – Birch and Swinnerton-Dyer Conjecture.” Kinertia. (www.youtube.com/watch?v=R9FKN9MIHlE)
 “Win a million dollars with maths, No. 4: The Hodge Conjecture.” The Guardian. Matt Parker. March 8, 2011. (https://www.theguardian.com/science/blog/2011/mar/01/million-dollars-maths-hodge-conjecture)
 “Millennium Prize: the Hodge Conjecture.” The Conversation. Arun Ram. Nov. 21, 2011. (https://theconversation.com/millennium-prize-the-hodge-conjecture-4243)
 “Hodge Conjecture.” CMI. (https://www.claymath.org/millennium-problems/hodge-conjecture)
 Devlin, page 160.
 “The Poincaré conjecture and the shape of the universe.” Pascal Lambrechts. March 2009. (https://www.wellesley.edu/sites/default/files/assets/lambrechts-colloq.pdf)
 “Earn $1,000,000 with Math? The Millennium Prize Problems.” Medium. Mark Dodds. April 18, 2019. (https://medium.com/@marktdodds/the-millennium-prize-problems-bce6c3b50222)
 “Millennium Prize: the Poincaré Conjecture.” The Conversation. Hyam Rubinstein. Nov. 28, 2011. (https://theconversation.com/millennium-prize-the-poincare-conjecture-4245)
 “Grigori Perelman.” World Heritage Encyclopedia. (http://www.self.gutenberg.org/articles/Grigori_Perelman)
 “Science’s breakthrough of the year—The Poincaré Theorem.” EurekAlert! The American Association for the Advancement of Science. Dec. 21, 2006. (https://www.eurekalert.org/pub_releases/2006-12/aaft-bo121506.php#)
 “Russian maths genius urged to take $1m prize.” BBC. March 24, 2010. (http://news.bbc.co.uk/2/hi/europe/8585407.stm#).
 “First Clay Mathematics Institute Millennium Prize Announced Today.” CMI. May 18, 2010. (https://www.claymath.org/sites/default/files/millenniumprizefull.pdf)