One reason that 1 is often excluded from the prime numbers is that if it were included, it would complicate the theorems, proofs, and exposition by the endless repetition of "not equal to 1".
> One reason that 1 is often excluded from the prime numbers is that if it were included, it would complicate the theorems, proofs, and exposition by the endless repetition of "not equal to 1".
This is true and compelling as things developed, but I think it's an explanation of where history brought us, rather than a logical inevitability. For example, I can easily imagine, in a different universe, teachers patiently explaining that we declare that the empty set is not a set, to avoid complicating theorems, proofs, and exposition by the endless repetition of "non-empty set."
(I agree that this is different, because there's no interesting "unique factorization theorem" for sets, but I can still imagine things developing this way. And, indeed, there are complications caused by allowing the empty set in a model of a structure, and someone determined to do so can make themselves pointlessly unpopular by asking "but have you considered the empty manifold?" and similar questions. See also https://mathoverflow.net/questions/45951/interesting-example....)
A good example of this is the natural numbers. Algebraists usually consider zero to be a natural number, because otherwise the natural numbers under addition don't form a monoid, and set theorists want zero because it's the cardinality of the empty set. My number theory textbook defined natural numbers as positive integers, but I'm not entirely sure why.
> My number theory textbook defined natural numbers as positive integers, but I'm not entirely sure why.
Since both the inclusion and exclusion of zero are accepted definitions depending on who’s asking, books usually just pick one or define two sets (commonly denoted as N_0 and N_1). Different topics benefit from using one set or the other, depending on things like whether division by zero has to be dealt with. Number theory tends to exclude zero.
Oh my, it had never occurred to me that one could disagree, not just about whether the natural numbers include 0 or don't, but also about how to denote "natural numbers with 0" and "natural numbers without." Personally, I'm a fan of Z_{\ge 0} and Z_{> 0}, which are a little ugly but which any mathematician, regardless of their preferred conventions, can read and understand without further explanation.
That's an interesting thought, but I think that'd break the usual trick of building up objects from the empty set, a set containing the empty set, then the set containing both of those and so forth.
That universe would be deprived of the bottomless wellspring of dryness that is the set theoretic foundations of mathematics. Unthinkable!
> That universe would be deprived of the bottomless wellspring of dryness that is the set theoretic foundations of mathematics. Unthinkable!
"Wellspring of dryness" is quite a metaphor, and I take it from that metaphor that this outcome wouldn't much bother you. I'll put in a personal defense for set theory, but only an appeal to my personal taste, since I have no expert, and barely even an amateurish, knowledge of set theory beyond the elementary; but I'll also acknowledge that set-theoretic foundations are not to everyone's taste, and that someone who has an alternate foundational system that appeals to them is doing no harm to themselves or to me.
> That's an interesting thought, but I think that'd break the usual trick of building up objects from the empty set, a set containing the empty set, then the set containing both of those and so forth.
In this alternate universe, the ZF or ZFC axioms (where C becomes, of course, "the product of sets is a set") would certainly involve, not the axiom of the empty set, but rather some sort of "axiom of sets", declaring that there exists a set. Because it's not empty, this set has at least one element, which we may extract and use to make a one-element set. Now observe that all one-element sets are set-theoretically the same, and so may indifferently be denoted by *; and then charge ahead with the construction, using not Ø, Ø ∪ {Ø}, Ø ∪ {Ø} ∪ {Ø ∪ {Ø}}, etc. but *, * ∪ {*}, * ∪ {*} ∪ {* ∪ {*}}, etc. Then all that would be left would be to decide whether our natural numbers started at the cardinality 1 of *, or whether we wanted natural numbers to count quantities 1 less than the cardinality of a set.
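For fun, here's a small Python sketch of both constructions side by side; the function names and the extracted element "x" are my own inventions for illustration, not anything from set theory itself.

    def von_neumann(n):
        """The usual trick: 0 = Ø, and each next stage is S ∪ {S}."""
        stage = frozenset()
        for _ in range(n):
            stage = stage | {stage}
        return stage

    def starred(n):
        """The alternate-universe variant: start from a one-element set * instead of Ø."""
        stage = frozenset({"x"})  # "*": a one-element set built from some extracted element
        for _ in range(n):
            stage = stage | {stage}
        return stage

    print(len(von_neumann(3)))  # 3: the usual stage for 3 has exactly 3 elements
    print(len(starred(3)))      # 4: the starred stages run one ahead in cardinality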
> Many (most?) results are easier to write if you allow the empty set. For example:
> "The intersection of two sets is a set."
Many results in set theory, yes! (Or at least in elementary set theory. I'm not a set theorist by profession, so I can't speak to how often it arises in research-level set theory.) But, once one leaves set theory, the empty set can cause problems. For the first example that springs to mind, it is a cute result that, if a set S has a binary operation * such that, for every pair of elements a, b in S, there is a unique solution x to a*x = b, and a unique solution y to y*a = b, then * makes S a group ... unless S is empty!
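If it helps to see that condition concretely, here is a brute-force check in Python for finite magmas; unique_solvability is a made-up helper name, and the point is that the empty magma passes vacuously:

    from itertools import product

    def unique_solvability(S, op):
        """For all a, b in S: exactly one x with op(a, x) == b, and one y with op(y, a) == b."""
        S = list(S)
        return all(
            sum(op(a, x) == b for x in S) == 1 and
            sum(op(y, a) == b for y in S) == 1
            for a, b in product(S, S)
        )

    print(unique_solvability([], lambda a, b: a))                  # True: vacuous, yet not a group
    print(unique_solvability(range(4), lambda a, b: (a + b) % 4))  # True: Z/4Z is a group
    print(unique_solvability([0, 1], lambda a, b: a * b))          # False: nothing solves 0 * x = 1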
In fact, on second thought, even in set theory, there are things like: the definition of a partial order being a well ordering would become simpler to state if the empty set were disallowed; and the axiom of choice would become just the statement that the product of sets is a set! I'm sure that I could come up with more examples where allowing empty sets complicates things, just as you could come up with more examples where it simplifies them. That there is no unambiguous answer one direction or the other is why I believe this alternate universe could exist, but we're not in it!
I don’t see why it’s a problem that the empty set cannot be a group. The empty set, being empty, lacks an identity element. Thus all groups are non-empty.
The same is true for any structure which posits the existence of some element. Of course it cannot be the empty set.
> I don’t see why it’s a problem that the empty set cannot be a group. The empty set, being empty, lacks an identity element. Thus all groups are non-empty.
It's not necessarily a problem that the empty set cannot be a group. (Although the only reason that it cannot is a definition, and, similarly, the definition of a field requires two distinct elements, which hasn't stopped some people from positing that it is a problem that there is then no field with one element.)
The problem is that there's a natural property of magmas (sets with a binary operation), namely the unique-solvability condition I mentioned, that characterizes "group or the empty set," which is more awkward than just characterizing groups. Or you may argue, fairly, that that's not a problem, but it is certainly an example where allowing the empty set to be a set complicates statements, which is all that I was meaning to illustrate. Hopefully obviously, without meaning seriously to suggest that the empty set shouldn't be a set.
(I remembered in the course of drafting this comment that https://golem.ph.utexas.edu/category/2020/08/the_group_with_... discusses, far more entertainingly and insightfully than I do, the characterization that I mention, and may have been where I learned it.)
If you don’t allow the empty set to be a set then you break the basic operations of set theory. For example, to show two sets are disjoint you compare their intersection with the empty set.
In an alternative axiomatization (without the empty set) you’re going to need to create some special element which belongs to every set and then your definition of disjoint sets is that their intersection is equal to the trivial set containing only the special element. What a clumsy hack that would be!
> If you don’t allow the empty set to be a set then you break the basic operations of set theory. For example, to show two sets are disjoint you compare their intersection with the empty set.
You certainly can do that, but it's not the only way. Even in this universe, I would expect to show that concrete sets A and B are disjoint by showing x ∈ A → x ∉ B, which makes perfect sense even without an empty set.
> In an alternative axiomatization (without the empty set) you’re going to need to create some special element which belongs to every set and then your definition of disjoint sets is that their intersection is equal to the trivial set containing only the special element. What a clumsy hack that would be!
Rather, in this alternate universe, intersection is partially defined. Again, even in this universe, we're used to accepting some operations being partial!
> Rather, in this alternate universe, intersection is partially defined.
Yes, but then topology becomes a very tedious exercise because so many proofs rely on the fact that the empty set is contained in every topology, that the empty set is both closed and open, and that intersections frequently yield the empty set. With partially defined intersection you're forced to specially handle every case where two sets might be disjoint.
> Yes, but then topology becomes a very tedious exercise because so many proofs rely on the fact that the empty set is contained in every topology, that the empty set is both closed and open, and that intersections frequently yield the empty set. With partially defined intersection you're forced to specially handle every case where two sets might be disjoint.
Certainly this would be a good objection if I proposed to get rid of empty sets in our universe. (I don't!) But an alternate universe that developed this way would have either just accepted that topology was an inherently ugly subject, or worked out some equivalent workaround (for example, with testing topologies by {0, 1}-valued functions, of which we can take maxima and minima to simulate unions and intersections without worrying about the possibility of an intersection being empty), or else come up with some other approach entirely. (There is, after all, nothing sacred about a topology being specified by its open sets; see the discussion at https://mathoverflow.net/questions/19152/why-is-a-topology-m.... That's how history shook out for us, but it's hardly an inevitable concept except for those of us who have already learned to think about things that way.)
I am not claiming that this would be an improvement (my suspicion is that it would be an improvement in some ways and a regression in others), just that I think that it is not unimaginable that history could have developed this way. It would not then have seemed that the definitions and theorems were artificially avoiding the concept of an empty set, because the mathematical thought of the humans who make those definitions and theorems would simply not think of the empty set as a thing, and so would naturally have taken what seem to us like circuitous tours around it. Just as, surely, there are circuitous tours that we take in our universe, that could be made more direct if we only phrased our reasoning in terms of ... well, who knows? If I knew, then that's the math that I'd be doing, and indeed I see much of the research I do as attempting to discover the "right" direct path to the conclusion, whether or not it's the approach that fits in best with the prevailing thought.
I know that at one time we did mathematics without the number zero and that its introduction was a profound (and controversial) change. The empty set seems like a perfectly natural extension of zero as a concept. Perhaps the universe with no empty set also has no zero? Would be very interesting to see how mathematics would develop without either construct.
And if we treat zero as not a number, it would make division much easier to define. I wrote that sentence as a joke but now I wonder if maybe it’s true. Does addition really need to have an identity? Maybe we just saw that multiplication has an identity and got a bit carried away. I’m not too sure about this negative number business while we’re at it. Could be that we just took a wrong turn somewhere.
> And if we treat zero as not a number, it would make division much easier to define. I wrote that sentence as a joke but now I wonder if maybe it’s true. Does addition really need to have an identity?
It probably doesn't, but, if you want to allow negative numbers, then addition is partial unless you have 0. It's perfectly reasonable to disallow negative numbers—historically, negative numbers had to be explicitly allowed, not explicitly disallowed—but it does mean that subtraction becomes a partial operation or, phrased equivalently but perhaps more compellingly, that we have to give up on solving simple equations for x like x + 2 = 1.
Well you did say you were okay with set intersection being partial (or I guess also set difference for the more direct analogy). Maybe not everything needs a solution. (Plus we’ve just gone from division being partial to subtraction being partial…but when I say that I begin to suspect that this argument has been made a lot before and we decided that the negative numbers get to stay. I don’t have anything against them personally but they’re probably less natural than the empty set being a set.)
I might be reading too much into what you’re saying about the empty set though and you just mean we could use the word “set” to mean “non-empty set” and then say something like “set-theoretic set” to mean what we now mean when we say “set.” But that sounds like a mouthful.
> Well you did say you were okay with set intersection being partial (or I guess also set difference for the more direct analogy).
Good point!
> I don’t have anything against them personally but they’re probably less natural than the empty set being a set.
An interesting idea, which history supports: 0 was considered as a number before negative numbers were, and we still usually consider only "natural sets" and not "negative sets" (except for Schanuel: https://doi.org/10.1007/BFb0084232).
> I might be reading too much into what you’re saying about the empty set though and you just mean we could use the word “set” to mean non-empty set and then say something like “set-theoretic set” to mean what we now mean when we say “set.”
Right, or a different word entirely, just like we refer to 1 only as a number that's not prime, not as a "number-theoretic prime." But, anyway, the analogy was just the first one that sprang to mind; it doubtless has many infelicities that could be improved by a better analogy, if it's not just a worthless idea overall.
Yeah I guess what I got stuck on is that we don’t currently have a word for “a set that’s not a set” (I guess a class?) like we do for a number that’s not a prime but I think I was just lacking linguistic imagination.
The concept of “one” holds a dual role. It represents a countable unit, something you can put in a bowl, and it also stands for indivisibility itself. When you divide any quantity by an indivisible unit, you’re simply counting how many of those indivisibles fit within it. Then comes 2: the first number that is divisible, but only by itself and the indivisible one. That’s what makes it prime. A prime is a number divisible only by itself and by 1, the indivisible origin of all counting.
Your explanation is true of every prime. I’m pretty sure GP just meant that “2 is the only prime with the additional characteristic of being an even number”. So it’s odd (read “interesting”) in that sense, as it would be if (for example) some number were the sole prime composed of exactly X digits.
And the reason we'd have to constantly exclude 1 is that it behaves in a qualitatively different way than the prime numbers do; understanding what this means and why it's the case is the real insight here.
Yes, it's more of a convention where we assume language like "...ignoring the trivial case of 1 being an obvious factor of every integer." It's not interesting or meaningful, so we ignore it for most cases.
"...ignoring the trivial case of 1 being an obvious factor of every integer."
I remember quite a big chunk of GEB being spent formally defining the integers, showing they are really not trivial! The main problem seems to be that you soon end up with circular reasoning if you are not razor sharp with your definitions. And that's just in an explainer book 8)
Correct, it's impossible to specifically and formally define the natural numbers so that addition and multiplication work. Any definition of the natural numbers will also define things that look very similar to natural numbers but are not actually natural numbers.
> Any definition of the natural numbers will also define things that look very similar to natural numbers but are not actually natural numbers
This isn't correct. This is only true for first-order theories of the natural numbers using the axiom schema of induction. Second-order Peano arithmetic with the full axiom of induction has the natural numbers as its only model. This property is called "categoricity", and you can find the proof here [1] if you're interested.
This isn't correct. While it's true that in second order logic the natural numbers admit categoricity, second order logic lacks axiomatic semantics. So yes, there is a single set which can be called the natural numbers in second order logic (namely the intersection of all sets that satisfy Peano's axioms), but this set has no interpretation.
You can adopt Henkin semantics to give the naturals an interpretation, which is still second order logic, but then you're back to lacking a categorical model of the naturals.
> So yes, there is a single set which can be called the natural numbers in second order logic (namely the intersection of all sets that satisfy Peano's axioms), but this set has no interpretation.
Can you explain what you mean here? Full semantics for second-order logic has a unique interpretation i.e. the standard natural numbers
Interpretation under full second-order logic is not intrinsic to the logic itself but is always supplied by a richer meta-theory, usually set theory/ZF. The sentence "all subsets of N" has no standalone meaning in second-order logic; it must be defined inside the meta-theory, which in turn relies on its own meta-theory, and so on ad infinitum.
Thus, although the full second-order Peano axioms are categorical, second-order logic by itself never delivers a self-contained model of the natural numbers. Any actual interpretation of the natural numbers in second-order logic requires an infinite regress of background theories.
My understanding is you can specifically and formally define the natural numbers with addition and multiplication, although multiplication means the language is no longer decidable.
You can define the natural numbers with just addition (Presburger arithmetic) and it’s decidable.
I’m not sure how undecidable <=> “will define things that are similar to natural numbers but are not”, but maybe I am missing something.
If a sentence S is undecidable from your axioms for the natural numbers then there are two models A and B satisfying those axioms where A satisfies S and B satisfies not S. So which one is the standard natural numbers, is it A or B?
Either A or B will be an example of something that satisfies your definition of natural numbers and yet is not the natural numbers.
> Correct, it's impossible to specifically and formally define the natural numbers so that addition and multiplication work. Any definition of the natural numbers will also define things that look very similar to natural numbers but are not actually natural numbers.
Are such objects not inevitably isomorphic to the natural numbers?
Can you give an example of a formal definition that leads to something that obviously isn't the same as the naturals?
In that article you'll see references to "first order logic" and "second order logic". First order logic captures any possible finite chain of reasoning. Second order logic allows us to take logical steps that would require a potentially infinite amount of reasoning to do. Gödel's famous theorems were about the limitations of first order logic. While second order logic has no such limitations, it is also not something that humans can actually do. (We can reason about second order logic though.)
Anyways a nonstandard model of arithmetic can have all sorts of bizarre things. Such as a proof that Peano Axioms lead to a contradiction. While it might seem that this leads to a contradiction in the Peano Axioms, it doesn't because the "proof" is (from our point of view) infinitely long, and so not really a proof at all! (This is also why logicians have to draw a very careful distinction between "these axioms prove" and "these axioms prove that they prove"...)
All of these models appear to contain infinitely sized objects that are explicitly named and manipulable within the model, which makes them extensions of the Peano numbers, or else they add other, extra axioms to the Peano model.
If you (for example) extend Peano numbers with extra axioms that state things like “hey, here are some hyperreals” or “this Goedel sentence is explicitly defined to be true (or false)” it’s unsurprising that you can end up in some weird places.
We are able to recognize that they are nonstandard because they contain numbers that we recognize are infinite. But there is absolutely no statement that can be made from within the model from which it could be discovered that those numbers are infinite.
Furthermore, it is possible to construct nonstandard models such that every statement that is true in our model, remains true in that one, and ditto for every statement that is false. They really look identical to our model, except that we know from construction that they aren't. This fact is what makes the transfer principle work in nonstandard analysis, and the ultrapower construction shows how to do it.
(My snark about NSA is that we shouldn't need the axiom of choice to find the derivative of x^2. But I do find it an interesting approach to know about.)
No additional axioms are needed for the existence of these models. On the contrary additional axioms are needed in order to eliminate them, and even still no amount of axioms can eliminate all of these extensions without introducing an inconsistency.
I know this is a great book, it’s been on my to-read list for about 5 years. But I never get to it. Is there not another (shorter) discussion I could read on this? Even an academic paper would be acceptable.
As I said above, I'm not an expert. However, I read GEB on a whim when bored at school and I think it still informs my thinking 35 years later.
Move GEB up the reading list right now! The edition I initially read was hard bound and was quite worn. I bought and read it again about 20 years ago and found more treasures.
It is a proper nerd grade treatise for non experts who are interested in maths, music and art. Really: maths, music and art from a mostly mathematical perspective. Hofstadter's writing style is very easy going and he is a master of clarity without complexity.
I don't think you need any more Maths than you would get up to age 18 or so at school to understand the entire book and probably less. Even if you gloss the formal Maths the book still works.
You could read Kurt Gödel's paper, but it's literally indecipherable. The book is one of the best reads ever. It will also teach you how to think in very, very formal ways. It made Calculus half the class it was, and I breezed through finite math.
Propositional calculus will teach you to think in symbols you could not previously even fathom. This alone is worth every minute spent reading the book.
Every few years I reread it, and get a new sense of solving problems. The book can be divided into parts... But the whole...
I mean it's logically impossible to formally and specifically define the natural numbers without introducing a logical inconsistency. The best you can do is write down rules that the natural numbers satisfy, but anything you write down will also be satisfied by things that look very similar to natural numbers yet aren't the natural numbers.
As an analogy you could imagine trying to define the set of all animals with a bunch of rules... "1. Animals have DNA, 2. Animals ingest organic matter. 3. Animals have a nervous system. 4. ... etc..."
And this is true of all animals, but it will also be true of things that aren't animals as well, like slime molds which are not quite animals but very similar to them.
Okay, so you keep adding more rules to narrow down your definition and stamp out slime molds, but then you find some other thing that satisfies the definition...
Now for animals maybe you can eventually have some very complex rule set that defines animals exactly and rules out all non-animals, but the principle is that this is not possible for natural numbers.
We can have rules like "0" is a natural number. For every natural number N there is a successor to it N + 1. If N + 1 = M + 1 then N = M. There is no natural number Q such that Q + 1 = 0.
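In the usual notation those rules are the first-order Peano axioms; here is a sketch, with induction written as a schema:

    \begin{align*}
    & 0 \in \mathbb{N} \\
    & \forall n\, \bigl(n \in \mathbb{N} \to S(n) \in \mathbb{N}\bigr) \\
    & \forall n\, \forall m\, \bigl(S(n) = S(m) \to n = m\bigr) \\
    & \forall n\, \bigl(S(n) \neq 0\bigr) \\
    & \Bigl(\varphi(0) \land \forall n\, \bigl(\varphi(n) \to \varphi(S(n))\bigr)\Bigr) \to \forall n\, \varphi(n)
    \end{align*}

The last line is one axiom for every first-order formula φ, which is exactly the "infinitely many rules" mentioned below; no such schema can say "n is reachable from 0 in finitely many successor steps", and that gap is where the mutants get in.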
Okay, this is a good starting point... but just like with animals, there are numbers that satisfy all of these rules but aren't natural numbers. You can keep adding more and more rules to try to stamp these numbers out, but no matter how hard you try, even if you add infinitely many rules, there will always be infinitely many numbers that satisfy your rules but aren't natural numbers.
In particular what you really want to say is that a natural number is finite, but no matter how hard you try there is no formal way to actually capture the concept of what it means to be finite in general so you end up with these mutant numbers that satisfy all of your rules but have infinitely many digits, and these are called non-standard natural numbers.
The reason non-standard natural numbers are a problem is because you might have a statement like "Every even integer greater than 2 can be written as the sum of two primes." and this statement might be true of the actual natural numbers but there might exist some freak mutant non-standard natural number for which it's not true. Unless your rules are able to stamp out these mutant non-standard natural numbers, then it is not possible to prove this statement, the statement becomes undecidable with respect to your rules. The only statements you can prove with respect to your rules are statements that are true of the real natural numbers as well as true of all the mutant natural numbers that your rules have not been able to stamp out.
So it's in this sense that I mean that it's not possible to specifically define the natural numbers. Any definition you come up with will also apply to mutant numbers, and these mutant numbers can get in the way of you proving things that are in principle true about the actual natural numbers.
It seems you know what you are on about! Thank you for a cracking comment.
I've always had this feeling that the foundations (integers etc) are a bit dodgy in formal Maths but just as with say Civil Engineering, your world hasn't fallen apart for at least some days and it works. Famously, in Physics involving quantum: "Shut up and calculate".
Thankfully, in the real world I just have to make web pages, file shares and glittery unicorns available to the computers belonging to paying customers. Securely ...
The foundational aspect equivalent of integers in IT might be DNS. Fuck around with either and you come unstuck rather quickly without realising exactly why until you get suitably rigorous ...
I'm also a networking bod (with some jolly expensive test gear) but that might be compared to pencils and paper for Maths 8)
If 1 is prime, then the fundamental theorem of arithmetic goes from "every positive integer can be written as a product of primes in one and only one way" to "every positive integer can be written as a product of primes greater than 1 in one and only one way". Doesn't quite have the same ring to it. So just from an aesthetic perspective, no, I'd rather 1 isn't a prime number.
It seems a little inconvenient to require acceptance that empty products equal 1, since that is also slightly subtle and deserving of its own explanation of mathematical terminology.
Of course, I generally hear the fundamental theorem of arithmetic phrased as “every integer greater than one…” which is making its own little special case for the number 1.
> It seems a little inconvenient to require acceptance that empty products equal 1
On the contrary: it is extremely inconvenient to not allow the product of an empty sequence of numbers to equal 1. The sum of an empty sequence is 0. The Baz of an empty sequence of numbers, for any monoid Baz, is the identity element of that monoid. Any other convention is going to be very painful and full of its own exceptions.
There are no exceptions to any rules here. 1 is not prime. Every positive integer can be expressed as the unique product of powers of primes. 1's expression is [], or 0000..., or ∅.
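For what it's worth, Python's built-ins already follow the empty-sum and empty-product conventions:

    import math

    print(sum([]))        # 0: the empty sum is the additive identity
    print(math.prod([]))  # 1: the empty product is the multiplicative identity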
Any convention comes with the inconvenience of definition and explanation, so to call the convention that the empty product equals 1 inconvenient on that basis alone seems a bit unfair. The reason the mathematical community has adopted this convention is that it makes a lot of proofs and theorems a bit easier to state. So yes, you lose a bit of convenience in one spot, and gain a bit in a whole bunch of spots.
And note that this convention is not at all required for the point I'm making regarding prime numbers. As you say yourself, restrict the theorem to integers greater than 1, and you can forget about empty products (and it is still easier to state if 1 is not prime (which it isn't)).
Isn't "every positive integer can be written as a product of primes greater than 1 in one and only one way" incorrect? A prime number is a only product of itself * 1, isn't it?
Mathematicians generally feel that a single number qualifies as a "product of 1 number." So 7 can be written as just 7 which is still considered a product of prime(s). This is purely a convention thing to make it so theorems can be stated more succinctly, as with not counting 1 as prime.
I remember something from math class about "1" and "prime" being special cases of "units" and "irreducibles" (?) that made me think these kinds of definitions are much more complicated than we want them to be, regardless.
The first part of your comment is completely correct. The latter is a matter of taste, of course. I think the main thing that can be said for a lot of the definitions we have in algebra is that the ones we're using are the ones that stood the test of time because they turned out to be useful. The distinction between invertible elements (units) and irreducible elements, while complicated, also gave us a conceptual framework allowing us to prove lots of interesting and useful theorems.
Some other definition fun: Should we define 0 as both positive and negative, or as neither positive nor negative? Does monotonically increasing mean x<y -> f(x)<f(y) or x≤y -> f(x)≤f(y)? Should we deny the law of excluded middle and use constructive math? Does infinity exist? If infinity exists, is it actual (as an object) or potential (as a function)? Is the axiom of choice true? Or, is the axiom of determinacy true?
Should we use a space-time manifold, or separate space and time dimensions? Do future objects exist, and do past objects exist? Do statements about the future have a definite truth value? Does Searle's Chinese Room think? Which Ship of Theseus is the original: the slowly replaced ship, or the ship rebuilt from the original parts?
I find that so many philosophy debates actually argue over definitions rather than practical matters, because definitions do matter. Well, add your own fun definition questions!
What's worse, French typically uses positif to mean "greater than or equal to 0", so some people will act confused if you use English 'positive' instead of 'strictly positive' to mean "greater than 0".
A slightly facetious answer might be that this is the wrong question to ask, and the right question is: when did 1 stop being a prime number? To which the answer is: some time between 1933 (when the 6th edition of Hardy's _A course in pure mathematics_ was published) and 1938 (when the 7th edition was published).
Definitions are neither true nor false. They're either useful or not useful.
The question of whether or not the integer 1 is a prime doesn't make sense. The question is whether it is useful to define it as such, and the answer is a resounding no.
Agreed. Definitions are made to differentiate things in a way useful for some goal. The question "Is X an M?" without a context or goal basically picks up whatever vague goals or purposes a person has lingering below the surface of consciousness, differing from what other participants have below theirs, leading to different answers, with no way to select the best one. In the case of what is considered prime, it's a matter of what definition simplifies the things that use it. It could be that two concepts are better, one including 1 and the other not including it. Since it's just a language shorthand, it makes no fundamental difference other than efficiency and clarity in communication about math.
While axioms are in some sense arbitrary, it is helpful if they are consistent (informally: you can't prove something that "is false"; formally: you can't prove p and not p). Also other people like it if your axioms feel obvious.
My point is that axioms "feeling obvious" is exactly a signal that they will be useful. The point of deductive reasoning based on axioms is that it is a shortcut to fill in problems of induction, which is what happens when we use pure empiricism.
If you really want to go down the road of solipsism, read Karl Popper.
You implicitly used an axiom to ignore the differences between the apples. Someone else could use different axioms to talk about the sizes of the apples (1 large + 1 small = ?), or the color of the apples (1 red + 1 green = ?), or the taste of the apples (1 sweet + 1 sour = ?).
People "axiom" their way out of 1+1=2 in this way: by changing the axioms, they change the topic, so they change the conclusion. I observe this pattern in disagreements very often.
I have used appropriate axioms, not arbitrary axioms. If you want to talk about size or color or taste, you would use “axioms” appropriate for you case.
You missed the lecture on the misuse of infinities.
If I have inf*k = inf and divide both sides by inf (the misuse), then 1 = any k, including 1/12. Now this is useless in calculus and number theory, but in quantum field theory it is a useful tool.
So inf = 1/12 and a non-convergent series = a constant, but you have misused dividing infinity by itself to get it.
Infinity for division? It's useful, like counting chickens starting at zero. L'Hôpital's rule is a very useful tool, but do not misuse it.
0^0 got Gemini 2.5 Pro the other day for me. It claimed all indeterminate forms (in the context of limits) are also undefined, in response to a prompt about dividing by zero. 0^0 is the most obvious exception; it's typically defined as 1, as you said.
I'm sure it depends on the definition of prime. I've always been partial to "Any integer with exactly 2 divisors". Short, simple, and it excludes 1 and negative numbers.
> I'm sure it depends on the definition of prime. I've always been partial to "Any integer with exactly 2 divisors". Short, simple, and it excludes 1 and negative numbers.
Depending on your definition of divisor, it excludes everything except 1 and -1, whose two integer divisors are 1 and -1. But then, if you specify that "divisor" means "positive integer divisor", it no longer automatically excludes the negative numbers, since the two positive integer divisors of -2 are 1 and 2. (Incidentally, plenty of algebraists, myself included, are perfectly comfortable with including -2 as a prime.)
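A little Python sketch of the divisor-counting definition under the "positive integer divisor" reading; the helper names are mine:

    def divisors(n):
        """Positive divisors of |n| -- the 'positive integer divisor' reading."""
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    def is_prime(n):
        """'Exactly two positive divisors' rules out 1 automatically (it has only one)."""
        return len(divisors(n)) == 2

    print(divisors(1), is_prime(1))    # [1] False
    print(divisors(7), is_prime(7))    # [1, 7] True
    print(divisors(-2), is_prime(-2))  # [1, 2] True: -2 qualifies, as noted above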
This is like a "do arrays start at 0 or 1" question, except as they mention, algebraic number theory pretty much settles it. Whether 0 is a natural number though is still open for bikeshedding.
I always thought that 0-based indexes were superior, until a few years ago when I needed to deal with Fortran code and realized that 1-based arrays allow using 0 as a non-existent index or sentinel, rather than the size_t(-1) hack found in C/C++. Like the article explains, depending on the domain, one or the other convention can be advantageous.
And then C/C++ compilers are subtly inconsistent: if 0 is a valid index, then null should correspond to uintptr_t(-1), not the 0 address. That leads to non-trivial complications in OS implementations to make sure that address 0 is not mapped, since from the hardware's point of view 0 is a perfectly normal address.
No, this article makes the case for 0-based indexing. Let's ignore the reality that computers fundamentally use 0-based indexes... The article says 1 is not prime because maths gets more awkward if it is.
In the same way we index from 0 because indexing gets way more awkward if we index from 1.
In-band sentinels are both quite rare, and also equally convenient with -1 or 0. In fact I would say -1 is a bit more elegant because sometimes you need multiple sentinel values and then you can easily use -2 (what are you going to use 0 and 1 and then index from 2?).
The more common operations are things like indexing into flattened multidimensional arrays, or dealing with intervals, which are both way more elegant with 0-based indexing.
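For instance, the flattened-2D-array arithmetic looks like this (a sketch; the function names are mine):

    # 0-based: flat index of cell (row, col) in a grid with ncols columns
    def flat0(row, col, ncols):
        return row * ncols + col

    # 1-based: the same mapping needs -1/+1 corrections at both ends
    def flat1(row, col, ncols):
        return (row - 1) * ncols + (col - 1) + 1

    print(flat0(2, 3, 10))  # 23
    print(flat1(3, 4, 10))  # 24: the same cell, with everything shifted by one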
0 is a valid index into an array. It's even a valid index into global memory in some environments. Not mapping memory to address 0 is completely trivial. I'm not sure what non-trivial complications you're thinking of.
Odd to see an article about prime numbers with no mention of ideals. If (1) were a prime ideal, then it would be the only non-maximal prime ideal. And it would be the only closed point in Spec(Z)...
I think 1 is so different from other numbers that it's understandable some people in the past considered it a prime number. However, by the early 1900s, mathematicians agreed to exclude 1 from the list of primes to keep mathematical rules clear and consistent.
"Only divisible by itself and 1" is a darn elegant definition.
1, 2 and 3 are kind of special to me; in prime distribution studies, I discovered that they really are special. Some things get easier if you consider only the primes greater than or equal to 5: explaining distribution gets easier, and some proofs become more obvious (tiny example: draw a Ulam-like spiral around the numbers of an analog clock. 2 and 3 become outliers, and a distribution reveals itself along the 1, 5, 7 and 11 diagonals).
Anyways, "only divisible by itself and 1" is a darn elegant definition.
When I was younger I had a period when I often thought about prime numbers (before I got old and started thinking about the Roman Empire).
I noticed the same as you, and IIRC the (some?) ancient Greeks actually had an idea of 1 not as a number, but as the unit that numbers were made of. So in a different class.
2 and 3 are also different, or rather, all other primes from 5 and up are neighbours of a multiple of 6 (though not all such neighbours are primes, of course).
In base 6 all those primes end in 5 or 1. What is the significance? I don't know. I remember that I started thinking that 2*3=6, so maybe the sequence of primes is a result of the intertwining of number systems in multiple dimensions or whatever? Then I started thinking about the late republic instead. ;)
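That neighbours-of-6 observation is easy to check by brute force; since 6k, 6k+2 and 6k+4 are even and 6k+3 is divisible by 3, only 6k+1 and 6k+5 can be prime once you pass 3:

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # every prime >= 5 leaves remainder 1 or 5 mod 6, i.e. ends in 1 or 5 in base 6
    print(all(p % 6 in (1, 5) for p in range(5, 10000) if is_prime(p)))  # True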
If you work with not only the primes but also the modulus value of each non-prime, things get even more interesting than thinking about base changes! To me, it reveals much more.
It's not entirely clear if that definition includes 1. On one hand 1 is certainly divisible by both itself and 1, but on the other hand they are the same number, so maybe it shouldn't count for "both", because the word "both" vaguely implies two distinct things. The usual "natural number with exactly two integer divisors" definition may not be as elegant but I think it is harder to misinterpret.
I see 1 as mostly an anchor. However, my thing is not about working out axioms and formal mathematics. I do some visualizations that can help demonstrate aspects of prime distribution.
I am fascinated by geometric proofs though. The clock thing is just a riff on Ulam's work. I believe there is more to it if one sees it as a geometric object and not just a visualization drawing. I could be wrong though.
What does primality look like with the addition operation instead of multiplication? 1, 2, 4, 8, ...? Or indeed just 1 alone, lol! (Yes, 1 is there because zero is the additive identity.)
> One way in which 1 “quacks” like a prime is the way it accords with Euclid’s Lemma, the principle that asserts that if p is a prime, then whenever the product of two integers is divisible by p, one of the two numbers or both must be divisible by p.
I think the main issue with "1" being prime is that without "1" each positive integer can be uniquely decomposed into a product of prime numbers. It is probably the most important fact about primes, and in this context "1" does look like an imposter.
2 being the only even prime isn't really anything fundamentally weird. Every prime is the only divisible-by-that-number prime. 2 has nothing unique about that.
We only notice the case for 2 because our human languages happen to define divisible-by-2 as a word and concept. If our languages called divisible-by-3 "treven" or something like that, we'd think it weird that 3 was the only treven prime.
It’s a little weird. Numbers being their own additive inverse in characteristic-2 makes for some special cases. (But I guess if we did algebra with ternary operators, 3 might be weird too.)
Since 1 is the multiplicative identity (x * 1 = x for any x in the set) and any definition of "prime" has to use multiplication, one way or another 1 is going to be special when talking about primes, whether it is included in the set of prime numbers or not. You can't avoid 1 being "special".
All models are wrong, but some models are useful. It's not useful to consider 1 prime, so we don't. You're free to invent a new model of math where 1 is prime and see where it takes you; nobody will be offended. This happens all the time: "but what if we could take the square root of a negative number? What then?", etc. 99% of the time, this leads to a theory that is provably inconsistent and therefore useless. Out of the remaining 1%, about 99% of the time it leads to a mathematics that is simply less useful than what we have now. So it goes with making 1 prime. Out of the remaining cases, about 99% of those turn out to be identical to an already existing mathematical theory, which is interesting (and possibly publishable), but not hugely useful. But about 1% of 1% of 1% of the time, these exercises result in actual new math that can tell us new things about reality and solve problems we couldn't solve before.
I've always wondered what actually breaks if 1 is prime, or conversely what defining 1 as not prime gives us. I got just far enough into my math degree, before switching to CompSci to stay out of universities for the rest of my life, to want to know.
The biggest problem is that you lose unique prime factorization. With prime factorization, I get a unique representation of every positive integer. Let's consider a way to write positive integers in "base prime", similar to base 10 or base 2. I'll start counting from 1 and write numbers as a tuple of prime factors. Similar to base 10, "base prime" has an infinite set of 0s that we're leaving out for brevity (e.g. 19 = 0000019), although it's on the right side instead of the left.
The i-th position in every tuple is the power of the i-th prime in the factorization of that number. So 10 = (1, 0, 1) = 2^1 * 3^0 * 5^1, and 84 would be (2, 1, 0, 1) = 2^2 * 3^1 * 5^0 * 7^1. If we have unique factorization, there is exactly one way to write every positive integer like this, and there are many insights we can gain from this factorization. If 1 is prime, then we can write 6 = 1^257 * 2^1 * 3^1, or use any other power of 1 we like. We just gain nothing from it.
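Here's a quick Python sketch of that "base prime" encoding (the function names are mine, and I fix the tuple width just to keep the printing simple):

    def primes():
        """Yield 2, 3, 5, 7, ... by trial division; fine for an illustration."""
        found = []
        n = 2
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1

    def base_prime(n, width=4):
        """Exponent tuple of n: the i-th entry is the power of the i-th prime."""
        digits = []
        gen = primes()
        for _ in range(width):
            p = next(gen)
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            digits.append(e)
        return tuple(digits)

    print(base_prime(10))  # (1, 0, 1, 0): 2^1 * 3^0 * 5^1
    print(base_prime(84))  # (2, 1, 0, 1): 2^2 * 3^1 * 5^0 * 7^1
    print(base_prime(1))   # (0, 0, 0, 0): the empty factorization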
There are often many equivalent ways to define any mathematical object, and I'm sure there are plenty of ways to define a prime number other than "its only factors are itself and 1". These other definitions are likely to obviously exclude 1. One obvious one is the set of basis coordinates in this "unique factorization" space that I just laid out here. And we're never really excluding or making a special case for 1, because 1's factorization is simply the absence of any powers -- empty set, all 0s, whatever you want to call it.
Keep in mind that "unique factorization" turns out to be very interesting in all sorts of other mathematical objects: rings, polynomials, symmetries, vector spaces, etc. They often have their own notion of "prime" or "primitive" objects and the correspondence with integer-primes is much cleaner if we don't consider 1 prime.
Some examples are in these comments, e.g. the Fundamental Theorem of Arithmetic. The Sieve of Eratosthenes is an amusing outcome, where 1 is the only prime if you take it literally.
But also mentioned elsewhere in the thread: if we declared 1 to be a prime, then many (I daresay "most") of our theorems would have to change "prime number" to "prime number greater than one".
We could declare 4 to be a prime number, and keep the rest of the definition the same. Instead of just saying "no", you could ask, "okay, what would that do for us?" If there isn't a good answer, then what's the point? And usually, you're not in the 1% of 1% of 1%.
I've been fascinated by numbers lately, and one of my go-to tools is a simple mobile app that calculates all the divisors of a given number. So I can determine prime numbers, and readily factor the non-primes. And it's been eye-opening.
Now I'm no crackpot numerologist, adding up the numerical values of Bill Gates' name, or telling you who shot JFK. But I can tell you that the main launch pad 39A at Cape Kennedy was not numbered by accident -- look it up in the Book of Psalms. And it's interesting how the city buses around here are numbered. For example, the 68xx series; I look up Psalm 68 and I can definitely imagine the bus singing that as it lumbers down the road -- can't you?
Back to primes -- consider the top number authorities of our times, such as the US Post Office, city planners, and the telephone company (circa the 1970s). I ran a chunk of ZIP codes from Southern California and discovered that some are the products of two quite large prime numbers. Others yield interesting factors. Once again I pull out my Book of Psalms.
There are plenty of other "hermeneutics" to interpret assigned numbers, especially street addresses. And as for phone numbers, I've gone back to figuring out "what do they spell" on a standard TouchTone keypad, because sometimes it's quite informative.
It's no accident, for example, that the hospital where I was born is located at 4077 5th Avenue. And that number assigned by city planners, many decades before M*A*S*H was written or went on TV. Significant nonetheless.
I also figured out a few prime numbers related to my own life, and others that are recurring tropes, just cropping up at interesting times. What's your social security number? Have you sort of broken it down and pondered if those numbers turned up again and again in your life? Every time I see a number now, I'm compulsively factoring it out in my head. Is it prime? It feels prime. I'll check it in the app later; try some mental math for now.
I'm also counting things more often now. How many spokes in a wheel? How many petals in a flower, especially a flower depicted in art. How many brick courses in that interesting wall they built? Plug any interesting numbers back into the divisors app. Finding the primes, find the factors, just ponder numeric coincidences. It's fun. So many signs and signals, hidden in plain sight before us. Buses singing Psalm 68 as they take on passengers. Launch pads singing Psalm 39 as Europa Clipper slips the surly bonds of Earth. What's on your telephone dial?
Depends on your definition of prime. By your reasoning, I could say 7 * 1 * 1 = 7, so it's not prime. Better to say a prime is any number whose set of divisors has exactly 2 elements, 1 and itself, if you want to exclude 1.
I apologize for the ambiguity; it's apparent once you've read the article, as it addresses this specific point.
Namely, if the sieve is the only generating function for all of the primes, then 1 would need to be omitted as a prime, since removing its multiples would remove every number, thus failing to generate the list of primes.
If you treat 1 as a prime number when running the sieve algorithm, 1 is the only prime number that remains after you have removed all its multiples from the list of candidate numbers.
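A small Python sketch of that literal reading; the start parameter is my own toggle for whether 1 joins the candidate list:

    def sieve(limit, start=2):
        """Sieve of Eratosthenes, taken literally: the smallest survivor is prime,
        then every remaining multiple of it (including itself) is struck out."""
        candidates = list(range(start, limit + 1))
        primes = []
        while candidates:
            p = candidates[0]
            primes.append(p)
            candidates = [n for n in candidates if n % p != 0]
        return primes

    print(sieve(30))           # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    print(sieve(30, start=1))  # [1]: everything is a multiple of 1, so nothing survives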
One reason that 1 is often excluded from the prime numbers is that if it was included, it would complicate the theorems, proofs, and exposition by the endless repetition of "not equal to 1".
> One reason that 1 is often excluded from the prime numbers is that if it was included, it would complicate the theorems, proofs, and exposition by the endless repetition of "not equal to 1".
This is true and compelling as things developed, but I think it's an explanation of where history brought us, rather than a logical inevitability. For example, I can easily imagine, in a different universe, teachers patiently explaining that we declare that the empty set is not a set, to avoid complicating theorems, proofs, and exposition by the endless repetition of "non-empty set."
(I agree that this is different, because there's no interesting "unique factorization theorem" for sets, but I can still imagine things developing this way. And, indeed, there are complications caused by allowing the empty set in a model of a structure, and someone determined to do so can make themselves pointlessly unpopular by asking "but have you considered the empty manifold?" and similar questions. See also https://mathoverflow.net/questions/45951/interesting-example....)
A good example of this is the natural numbers. Algebraists usually consider zero to be a natural number because otherwise, it's not a monoid and set theorists want zero because it's the size of the empty set. My number theory textbook defined natural numbers as positive integers, but I'm not entirely sure why.
> My number theory textbook defined natural numbers as positive integers, but I'm not entirely sure why.
Since both the inclusion and exclusion of zero are accepted definitions depending on who’s asking, books usually just pick one or define two sets (commonly denoted as N_0 and N_1). Different topics benefit from using one set over the other, as well as having to deal with division by zero, etc. Number theory tends to exclude zero.
> commonly denoted as N_0 and N_1
Oh my, it had never occurred to me that one could disagree, not just about whether the natural numbers include 0 or don't, but also about how to denote "natural numbers with 0" and "natural numbers without." Personally, I'm a fan of Z_{\ge 0} and Z_{> 0}, which are a little ugly but which any mathematician, regardless of their preferred conventions, can read and understand without further explanation.
Yep, lots of ways to denote these sets. It’s not a disagreement but rather a preference (although certainly some folks will gladly disagree).
Number theory includes zero as the identity element for addition, much as 1 is the identity element for multiplication.
I am totally assuming you knew this already.
For the sake of making an easy transition to the monoid, yes. Personally a fan.
That's an interesting thought, but I think that'd break the usual trick of building up objects from the empty set, a set containing the empty set, then the set containing both of those and so forth.
That universe would be deprived from the bottomless wellspring of dryness that is the set theoretic foundations of mathematics. Unthinkable!
> That universe would be deprived from the bottomless wellspring of dryness that is the set theoretic foundations of mathematics. Unthinkable!
"Wellspring of dryness" is quite a metaphor, and I take it from that metaphor that this outcome wouldn't much bother you. I'll put in a personal defense for set theory, but only an appeal to my personal taste, since I have no expert, and barely even an amateurish, knowledge of set theory beyond the elementary; but I'll also acknowledge that set-theoretic foundations are not to everyone's taste, and that someone who has an alternate foundational system that appeals to them is doing no harm to themselves or to me.
> That's an interesting thought, but I think that'd break the usual trick of building up objects from the empty set, a set containing the empty set, then the set containing both of those and so forth.
In this alternate universe, the ZF or ZFC axioms (where C becomes, of course, "the product of sets is a set") would certainly involve, not the axiom of the empty set, but rather some sort of "axioms of sets", declaring that there exists a set. Because it's not empty, this set has at least one element, which we may extract and use to make a one-element set. Now observe that all one-element sets are set-theoretically the same, and so may indifferently be denoted by *; and then charge ahead with the construction, using not Ø, Ø ∪ {Ø}, Ø ∪ {Ø} ∪ {Ø ∪ {Ø}}, etc. but *, * ∪ {*}, * ∪ {*} ∪ {* ∪ {*}}, etc. Then all that would be left would be to decide whether our natural numbers started at the cardinality 1 of *, or if we wanted natural numbers to count quantities 1 less than the cardinality of a set.
I should apologize if I came off too colorful, I only meant it as a friendly jab - but my bias is showing :)
Appreciate the defense of set theory, I can't find a problem with it!
No apology needed! It's all in fun, and we might as well enjoy the discussion.
Many (most?) results are easier to write if you allow the empty set. For example:
"The intersection of two sets is a set."
> Many (most?) results are easier to write if you allow the empty set. For example:
> "The intersection of two sets is a set."
Many results in set theory, yes! (Or at least in elementary set theory. I'm not a set theorist by profession, so I can't speak to how often it arises in research-level set theory.) But, once one leaves set theory, the empty set can cause problems. For the first example that springs to mind, it is a cute result that, if a set S has a binary operation * such that, for every pair of elements a, b in S, there is a unique solution x to a*x = b, and a unique solution y to y*a = b, then * makes S a group ... unless S is empty!
In fact, on second thought, even in set theory, there are things like: the definition of a partial order being a well ordering would become simpler to state if the empty set were disallowed; and the axiom of choice would become just the statement that the product of sets is a set! I'm sure that I could come up with more examples where allowing empty sets complicates things, just as you could come up with more examples where it simplifies them. That there is no unambiguous answer one direction or the other is why I believe this alternate universe could exist, but we're not in it!
I don’t see why it’s a problem that the empty set cannot be a group. The empty set, being empty, lacks an identity element. Thus all groups are non-empty.
The same is true for any structure which posits the existence of some element. Of course it cannot be the empty set.
> I don’t see why it’s a problem that the empty set cannot be a group. The empty set, being empty, lacks an identity element. Thus all groups are non-empty.
It's not necessarily a problem that the empty set cannot be a group. (Although the only reason that it cannot is a definition, and, similarly, the definition of a field requires two distinct elements, which hasn't stopped some people from positing that it is a problem that there is then no field with one element.)
The problem is that there's a natural property of magmas (sets with binary operation), namely the uniquely solvability condition I mentioned, that characterizes "group or the empty set," which is more awkward than just characterizing groups. Or you may argue, fairly, that that's not a problem, but it is certainly an example where allowing the empty set to be a set complicates statements, which is all that I was meaning to illustrate. Hopefully obviously, without meaning seriously to suggest that the empty set shouldn't be a set.
(I remembered in the course of drafting this comment that https://golem.ph.utexas.edu/category/2020/08/the_group_with_... discusses, far more entertainingly and insightfully than I do, the characterization that I mention, and may have been where I learned it.)
If you don’t allow the empty set to be a set then you break the basic operations of set theory. For example, to show two sets are disjoint you compare their intersection with the empty set.
In an alternative axiomatization (without the empty set) you’re going to need to create some special element which belongs to every set and then your definition of disjoint sets is that their intersection is equal to the trivial set containing only the special element. What a clumsy hack that would be!
> If you don’t allow the empty set to be a set then you break the basic operations of set theory. For example, to show two sets are disjoint you compare their intersection with the empty set.
You certainly can do that, but it's not the only way. Even in this universe, I would expect to show that concrete sets A and B are disjoint by showing x ∈ A → x ∉ B, which makes perfect sense even without an empty set.
> In an alternative axiomatization (without the empty set) you’re going to need to create some special element which belongs to every set and then your definition of disjoint sets is that their intersection is equal to the trivial set containing only the special element. What a clumsy hack that would be!
Rather, in this alternate universe, intersection is partially defined. Again, even in this universe, we're used to accepting some operations being partial!
> Rather, in this alternate universe, intersection is partially defined.
Yes, but then topology becomes a very tedious exercise because so many proofs rely on the fact that the empty set is contained in every topology, that the empty set is both closed and open, and that intersections frequently yield the empty set. With partially defined intersection you're forced to specially handle every case where two sets might be disjoint.
> Yes, but then topology becomes a very tedious exercise because so many proofs rely on the fact that the empty set is contained in every topology, that the empty set is both closed and open, and that intersections frequently yield the empty set. With partially defined intersection you're forced to specially handle every case where two sets might be disjoint.
Certainly this would be a good objection if I proposed to get rid of empty sets in our universe. (I don't!) But an alternate universe that developed this way would have either just accepted that topology was an inherently ugly subject, or worked out some equivalent workaround (for example, with testing topologies by {0, 1}-valued functions, of which we can take maxima and minima to simulate unions and intersections without worrying about the possibility of an intersection being empty), or else come up with some other approach entirely. (There is, after all, nothing sacred about a topology being specified by its open sets; see the discussion at https://mathoverflow.net/questions/19152/why-is-a-topology-m.... That's how history shook out for us, but it's hardly an inevitable concept except for those of us who have already learned to think about things that way.)
I am not claiming that this would be an improvement (my suspicion is that it would be an improvement in some ways and a regression in others), just that I think that it is not unimaginable that history could have developed this way. It would not then have seemed that the definitions and theorems were artificially avoiding the concept of an empty set, because the mathematical thought of the humans who make those definitions and theorems would simply not think of the empty set as a thing, and so would naturally have taken what seem to us like circuitous tours around it. Just as, surely, there are circuitous tours that we take in our universe, that could be made more direct if we only phrased our reasoning in terms of ... well, who knows? If I knew, then that's the math that I'd be doing, and indeed I see much of the research I do as attempting to discover the "right" direct path to the conclusion, whether or not it's the approach that fits in best with the prevailing thought.
You’ve given me much food for thought, thanks!
I know that at one time we did mathematics without the number zero and that its introduction was a profound (and controversial) change. The empty set seems like a perfectly natural extension of zero as a concept. Perhaps the universe with no empty set also has no zero? Would be very interesting to see how mathematics would develop without either construct.
And if we treat zero as not a number, it would make division much easier to define. I wrote that sentence as a joke but now I wonder if maybe it’s true. Does addition really need to have an identity? Maybe we just saw that multiplication has an identity and got a bit carried away. I’m not too sure about this negative number business while we’re at it. Could be that we just took a wrong turn somewhere.
> And if we treat zero as not a number, it would make division much easier to define. I wrote that sentence as a joke but now I wonder if maybe it’s true. Does addition really need to have an identity?
It probably doesn't, but, if you want to allow negative numbers, then addition is partial unless you have 0. It's perfectly reasonable to disallow negative numbers—historically, negative numbers had to be explicitly allowed, not explicitly disallowed—but it does mean that subtraction becomes a partial operation or, phrased equivalently but perhaps more compellingly, that we have to give up on solving simple equations for x like x + 2 = 1.
Well you did say you were okay with set intersection being partial (or I guess also set difference for the more direct analogy). Maybe not everything needs a solution. (Plus we’ve just gone from division being partial to subtraction being partial…but when I say that I begin to suspect that this argument has been made a lot before and we decided that the negative numbers get to stay. I don’t have anything against them personally but they’re probably less natural than the empty set being a set.)
I might be reading too much into what you’re saying about the empty set though and you just mean we could use the word “set” to mean “non-empty set” and then say something like “set-theoretic set” to mean what we now mean when we say “set.” But that sounds like a mouthful.
> Well you did say you were okay with set intersection being partial (or I guess also set difference for the more direct analogy).
Good point!
> I don’t have anything against them personally but they’re probably less natural than the empty set being a set.
An interesting idea, which history supports: 0 was considered as a number before negative numbers were, and we still usually consider only "natural sets" and not "negative sets" (except for Schanuel: https://doi.org/10.1007/BFb0084232).
> I might be reading too much into what you’re saying about the empty set though and you just mean we could use the word “set” to mean non-empty set and then say something like “set-theoretic set” to mean what we now mean when we say “set.”
Right, or a different word entirely, just like we refer to 1 only as a number that's not prime, not as a "number-theoretic prime." But, anyway, the analogy was just the first one that sprang to mind; it doubtless has many infelicities that could be improved by a better analogy, if it's not just a worthless idea overall.
Yeah I guess what I got stuck on is that we don’t currently have a word for “a set that’s not a set” (I guess a class?) like we do for a number that’s not a prime but I think I was just lacking linguistic imagination.
To be fair, 2 is also a very odd prime because it's even.
So many theorems have to say, "for every odd prime..."
https://math.stackexchange.com/questions/1177104/what-is-an-...
The concept of "one" holds a dual role. It represents a countable unit (something you can put in a bowl) and also stands for indivisibility itself. When you divide any quantity by an indivisible unit, you're simply counting how many of those indivisibles fit within it. Then comes 2: the first number that is divisible, but only by itself and the indivisible one. That's what makes it prime. A prime is a number divisible only by itself and by 1, the indivisible origin of all counting.
> Then comes 2: the first number that is divisible, but only by itself and the indivisible one.
This does hold in the ring Z. In the ring Z[i], 2 = (1+i)*(1-i), and the two factors are prime elements.
It's actually the least odd prime
It's hardly odd.
"Even" just means "divisible by 2"
"2 is the only prime that is divisible by 2" "3 is the only prime that is divisible by 3" "5 is the only prime that is divisible by 5"
...
"N is the only prime that is divisible by N"
Exactly, we could also have a word for multiple of three or multiple of five
Threeven is used semi-seriously.
Your explanation is true of every prime. I’m pretty sure GP just meant that “2 is the only prime with the additional characteristic of being an even number”. So it’s odd (read “interesting”) in that sense, like if it would be if (for example) any number were to be the sole prime composed of exactly X digits.
It isn't odd at all! And yes, I'm being pedantic. But you can't say it is very odd, and then in the next sentence say "for every odd prime..."
"2 is the only even prime number. Therefore, it's the oddest of them all!"
And the reason we'd have to constantly exclude 1 is that it behaves in a qualitatively different way than prime numbers do; understanding what this means and why that's the case is the real insight here.
Yes, it's more of a convention where we assume language like "...ignoring the trivial case of 1 being an obvious factor of every integer." It's not interesting or meaningful, so we ignore it for most cases.
Exactly. This is similar to the case of how the zero function provides a trivial solution to almost every differential equation.
I'm no expert but:
"...ignoring the trivial case of 1 being an obvious factor of every integer."
I remember quite a big chunk of GEB formally defining how integers are really not trivial! The main problem seems to be that you soon end up with circular reasoning if you are not razor sharp with your definitions. That's just in an explainer book 8)
Then you have to define what factor means ...
Correct, it's impossible to specifically and formally define the natural numbers so that addition and multiplication work. Any definition of the natural numbers will also define things that look very similar to natural numbers but are not actually natural numbers.
>Any definition of the natural numbers will also define things that look very similar to natural numbers but are not actually natural numbers
This isn't correct. This is only true for first-order theories of the natural numbers using the axiom schema of induction. Second-order Peano arithmetic with the full axiom of induction has the natural numbers as its only model. This property is called "categoricity" and you can find the proof here [1] if you're interested
[1]: https://builds.openlogicproject.org/content/second-order-log...
This isn't correct. While it's true that in second order logic the natural numbers admit categoricity, second order logic lacks axiomatic semantics. So yes, there is a single set which can be called the natural numbers in second order logic (namely the intersection of all sets that satisfy Peano's axioms), but this set has no interpretation.
You can adopt Henkin semantics to give the naturals an interpretation, which is still second order logic, but then you're back to lacking a categorical model of the naturals.
> So yes, there is a single set which can be called the natural numbers in second order logic (namely the intersection of all sets that satisfy Peano's axioms), but this set has no interpretation.
Can you explain what you mean here? Full semantics for second-order logic has a unique interpretation i.e. the standard natural numbers
Interpretation under full second‑order logic is not intrinsic to the logic itself but is always supplied by a richer meta‑theory, usually set theory/ZF. The sentence "All subsets of N" has no standalone meaning in second-order logic, it must be defined inside of the meta-theory, which in turn relies on its own meta‑theory, and so on ad infinitum.
Thus, although full second order Peano axioms are categorical, second order logic by itself never delivers a self‑contained model of the natural numbers. Any actual interpretation of the natural numbers in second order logic requires an infinite regress of background theories.
Can you elaborate on this?
My understanding is you can specifically and formally define the natural numbers with addition and multiplication, although multiplication means the language is no longer decidable.
You can define natural numbers with just addition (Presburger arithmetic) and it's decidable.
I'm not sure how undecidable <=> "will define things that are similar to natural numbers but are not", but maybe I am missing something
Yeah for sure.
If a sentence S is undecidable from your axioms for the natural numbers then there are two models A and B satisfying those axioms where A satisfies S and B satisfies not S. So which one is the standard natural numbers, is it A or B?
Either A or B will be an example of something that satisfies your definition of natural numbers and yet is not the natural numbers.
> Correct, it's impossible to specifically and formally define the natural numbers so that addition and multiplication work. Any definition of the natural numbers will also define things that look very similar to natural numbers but are not actually natural numbers.
Are such objects not inevitably isomorphic to the natural numbers?
Can you give an example of a formal definition that leads to something that obviously isn't the same as the naturals?
The Peano Axioms lead to both the standard model of arithmetic (the integers that we want), and nonstandard models. See https://en.wikipedia.org/wiki/Non-standard_model_of_arithmet....
In that article you'll see references to "first order logic" and "second order logic". First order logic captures any possible finite chain of reasoning. Second order logic allows us to take logical steps that would require a potentially infinite amount of reasoning to do. Gödel's famous theorems were about the limitations of first order logic. While second order logic has no such limitations, it is also not something that humans can actually do. (We can reason about second order logic though.)
Anyways a nonstandard model of arithmetic can have all sorts of bizarre things. Such as a proof that Peano Axioms lead to a contradiction. While it might seem that this leads to a contradiction in the Peano Axioms, it doesn't because the "proof" is (from our point of view) infinitely long, and so not really a proof at all! (This is also why logicians have to draw a very careful distinction between "these axioms prove" and "these axioms prove that they prove"...)
All of these models appear to contain infinitely sized objects that are explicitly named / manipulable within the model, which makes them extensions of the Peano numbers though, or else they add other, extra axioms to the Peano model.
If you (for example) extend Peano numbers with extra axioms that state things like “hey, here are some hyperreals” or “this Goedel sentence is explicitly defined to be true (or false)” it’s unsurprising that you can end up in some weird places.
We are able to recognize that they are nonstandard because they contain numbers that we recognize are infinite. But there is absolutely no statement that can be made from within the model from which it could be discovered that those numbers are infinite.
Furthermore, it is possible to construct nonstandard models such that every statement that is true in our model, remains true in that one, and ditto for every statement that is false. They really look identical to our model, except that we know from construction that they aren't. This fact is what makes the transfer principle work in nonstandard analysis, and the ultrapower construction shows how to do it.
(My snark about NSA is that we shouldn't need the axiom of choice to find the derivative of x^2. But I do find it an interesting approach to know about.)
No additional axioms are needed for the existence of these models. On the contrary, additional axioms are needed in order to eliminate them, and even then no amount of axioms can eliminate all of these extensions without introducing an inconsistency.
Do you have a link to where I could learn more about this?
You might start here: https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
That's the GEB I mentioned above.
I know this is a great book, it’s been on my to-read list for about 5 years. But I never get to it. Is there not another (shorter) discussion I could read on this? Even an academic paper would be acceptable.
There's no shortage of blog posts on the topic, but here is one that is fairly rigorous but doesn't assume too much background knowledge:
https://risingentropy.com/a-result-on-the-incompleteness-of-...
As I said above, I'm not an expert. However, I read GEB on a whim when bored at school and I think it still informs my thinking 35 years later.
Move GEB up the reading list right now! The edition I initially read was hard bound and was quite worn. I bought and read it again about 20 years ago and found more treasures.
It is a proper nerd grade treatise for non experts who are interested in maths, music and art. Really: maths, music and art from a mostly mathematical perspective. Hofstadter's writing style is very easy going and he is a master of clarity without complexity.
I don't think you need any more Maths than you would get up to age 18 or so at school to understand the entire book and probably less. Even if you gloss the formal Maths the book still works.
You could read Kurt Gödel's paper, but it's literally undecipherable. The book is one of the best reads ever. It will also teach you how to think in very, very formal ways. It made Calculus half the class it was, and I breezed through finite math.
Propositional Calculus will teach you to think in symbols you cannot even fathom. This alone is worth every minute reading the book.
Every few years I reread it, and get a new sense of solving problems. The book can be divided into parts... But the whole...
What do you mean by "not actually"?
Edit: do you mean literally impossible?
I mean it's logically impossible to formally and specifically define the natural numbers without introducing a logical inconsistency. The best you can do is write a definition that captures all the properties of the natural numbers but that also admits things that aren't natural numbers.
As an analogy you could imagine trying to define the set of all animals with a bunch of rules... "1. Animals have DNA, 2. Animals ingest organic matter. 3. Animals have a nervous system. 4. ... etc..."
And this is true of all animals, but it will also be true of things that aren't animals as well, like slime molds which are not quite animals but very similar to them.
Okay, so you keep adding more rules to narrow down your definition and stamp out slime molds, but then you find some other thing that satisfies the definition...
Now for animals maybe you can eventually have some very complex rule set that defines animals exactly and rules out all non-animals, but the principle is that this is not possible for natural numbers.
We can have rules like "0" is a natural number. For every natural number N there is a successor to it N + 1. If N + 1 = M + 1 then N = M. There is no natural number Q such that Q + 1 = 0.
Okay, this is a good starting point... but just like with animals, there are numbers that satisfy all of these rules but aren't natural numbers. You can keep adding more and more rules to try to stamp these numbers out, but no matter how hard you try, even if you add infinitely many rules, there will always be infinitely many numbers that satisfy your rules but aren't natural numbers.
In particular what you really want to say is that a natural number is finite, but no matter how hard you try there is no formal way to actually capture the concept of what it means to be finite in general so you end up with these mutant numbers that satisfy all of your rules but have infinitely many digits, and these are called non-standard natural numbers.
The reason non-standard natural numbers are a problem is because you might have a statement like "Every even integer greater than 2 can be written as the sum of two primes." and this statement might be true of the actual natural numbers but there might exist some freak mutant non-standard natural number for which it's not true. Unless your rules are able to stamp out these mutant non-standard natural numbers, then it is not possible to prove this statement, the statement becomes undecidable with respect to your rules. The only statements you can prove with respect to your rules are statements that are true of the real natural numbers as well as true of all the mutant natural numbers that your rules have not been able to stamp out.
So it's in this sense that I mean that it's not possible to specifically define the natural numbers. Any definition you come up with will also apply to mutant numbers, and these mutant numbers can get in the way of you proving things that are in principle true about the actual natural numbers.
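As a side note, the successor rules above are easy to write down in a proof assistant; here is a sketch in Lean (the name N is arbitrary). An inductive type like this also comes with an induction principle, which is exactly the ingredient that no list of first-order rules can fully capture, and which is what rules out the mutants:

    inductive N where
      | zero : N          -- "0" is a natural number
      | succ : N → N      -- every natural number has a successor

    -- Injectivity of succ ("if N + 1 = M + 1 then N = M") and the
    -- absence of a Q with succ Q = zero both hold automatically for
    -- the constructors of an inductive type.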
It seems you know what you are on about! Thank you for a cracking comment.
I've always had this feeling that the foundations (integers etc) are a bit dodgy in formal Maths but just as with say Civil Engineering, your world hasn't fallen apart for at least some days and it works. Famously, in Physics involving quantum: "Shut up and calculate".
Thankfully, in the real world I just have to make web pages, file shares and glittery unicorns available to the computers belonging to paying customers. Securely ...
The foundational aspect equivalent of integers in IT might be DNS. Fuck around with either and you come unstuck rather quickly without realising exactly why until you get suitably rigorous ...
I'm also a networking bod (with some jolly expensive test gear) but that might be compared to pencils and paper for Maths 8)
Second-order arithmetic formalizes both natural and real numbers. Though it has a host of issues with inference.
The previous poster didn’t describe the natural numbers as trivial. Rather, described a case as trivial.
Specifically, the case of the divisor being 1.
Seems that if we must add all these conditions to make the definition of prime consistent, maybe we shouldn't consider it prime?
If 1 is prime, then the fundamental theorem of arithmetic goes from "every positive integer can be written as a product* of primes in one and only one way" to "every positive integer can be written as a product of primes greater than 1 in one and only one way". Doesn't quite have the same ring to it. So just from an aesthetic perspective, no I'd rather 1 isn't a prime number.
* empty products being 1 of course
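For what it's worth, computer algebra systems encode both conventions at once; for instance with sympy (assuming it's available):

    from sympy import factorint

    # factorint returns the {prime: exponent} dict of a factorization.
    print(factorint(84))   # {2: 2, 3: 1, 7: 1}
    print(factorint(1))    # {}  -- the empty product, which is 1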
Not just that one; practically every useful theorem about primes would have to be rewritten to "if p is a prime other than 1".
That is the first time I thought of 1 as being the product of []
That is enough justification for me of 1 not being prime. It has a factorisation!
It seems a little inconvenient to require acceptance that empty products equal 1, since that is also slightly subtle and deserving of its own explanation of mathematical terminology.
Of course, I generally hear the fundamental theorem of arithmetic phrased as “every integer greater than one…” which is making its own little special case for the number 1.
> It seems a little inconvenient to require acceptance that empty products equal 1
On the contrary: it is extremely inconvenient to not allow the product of an empty sequence of numbers to equal 1. The sum of an empty sequence is 0. The Baz of an empty sequence of numbers, for any monoid Baz, is the identity element of that monoid. Any other convention is going to be very painful and full of its own exceptions.
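Python's standard library follows exactly this convention; a small illustration:

    import math
    import operator
    from functools import reduce

    print(sum([]))                       # 0  -- identity of addition
    print(math.prod([]))                 # 1  -- identity of multiplication
    # Folding an empty sequence in any monoid yields its identity:
    print(reduce(operator.add, [], ""))  # '' -- identity of concatenation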
There are no exceptions to any rules here. 1 is not prime. Every positive integer can be expressed as the unique product of powers of primes. 1's expression is [], or 0000..., or ∅.
That’s not what I meant. I agree that the empty product being equal to 1 is reasonable.
I meant that it’s inconvenient to require engaging with that concept directly in the everyday definition of prime numbers.
Any convention comes with the inconvenience of definition and explanation. So to call the convention that the empty product equals 1 inconvenient based on that alone seems a bit unfair. The reason the mathematical community has adopted this convention is that it makes a lot of proofs and theorems a bit easier to state. So yes, you lose a bit of convenience in one spot, and gain a bit in a whole bunch of spots.
And note that this convention is not at all required for the point I'm making regarding prime numbers. As you say yourself, restrict the theorem to integers greater than 1, and you can forget about empty products (and it is still easier to state if 1 is not prime (which it isn't)).
Isn't "every positive integer can be written as a product of primes greater than 1 in one and only one way" incorrect? A prime number is a only product of itself * 1, isn't it?
Mathematicians generally feel that a single number qualifies as a "product of 1 number." So 7 can be written as just 7 which is still considered a product of prime(s). This is purely a convention thing to make it so theorems can be stated more succinctly, as with not counting 1 as prime.
Ah, OK, thank you.
1 is not greater than 1, and a product of one prime is still a product of primes
Yeah, I didn't understand you can have a product of a single number.
The empty product, and also 0! = 1 (which is extra unintuitive because 0 isn't empty, i.e. products involving 0 are 0)
I remember something from math class about "1" and "prime" being special cases of "units" and "irreducibles" (?) that made me think these kinds of definitions are much more complicated than we want them to be regardless.
The first part of your comment is completely correct. The latter is a matter of taste, of course. I think the main thing that can be said for a lot of the definitions we have in algebra is that the ones we're using are the ones that stood the test of time because they turned out to be useful. The distinction between invertible elements (units) and irreducible elements, while complicated, also gave us a conceptual framework allowing us to prove lots of interesting and useful theorems.
Some other definition fun: Should we define 0 as both positive and negative, or neither positive nor negative? Does monotonically increasing mean x<y -> f(x)<f(y) or x≤y -> f(x)≤f(y)? Should we deny the law of excluded middle and use constructive math? Does infinity exist? If infinity exists, is it actual (as an object) or potential (as a function)? Is the axiom of choice true? Or, is the axiom of determinacy true?
Should we use a space-time manifold, or separate space and time dimensions? Do future objects exist, and do past objects exist? Do statements about the future have a definite truth value? Does Searle's Chinese Room think? Which Ship of Theseus is the original: the slowly replaced ship, or the ship rebuilt from the original parts?
I find that so many philosophy debates actually argue over definitions rather than practical matters, because definitions do matter. Well, add your own fun definition questions!
What's worse, French typically uses positif to mean "greater than or equal to 0", so some people will act confused if you use English 'positive' instead of 'strictly positive' to mean "greater than 0".
Another very interesting article on the primality of 1 is Evelyn Lamb's _Why isn't 1 a prime number?_ (https://www.scientificamerican.com/blog/roots-of-unity/why-i...)
A slightly facetious answer might be that this is the wrong question to ask, and the right question is: when did 1 stop being a prime number? To which the answer is: some time between 1933 (when the 6th edition of Hardy's _A course in pure mathematics_ was published) and 1938 (when the 7th edition was published).
Just a note from your friendly philosophy degree holder:
Axioms are arbitrary. Use the axioms that are the most useful.
Definitions are neither true nor false. They're either useful or not useful.
The question of whether or not the integer 1 is a prime doesn't make sense. The question is is it useful to define it as such and the answer is a resounding no.
Agreed. Definitions are made to differentiate things in a way useful for some goal. The question "Is X an M?" without a context or goal basically picks up whatever vague goals or purposes a person has lingering below the surface of consciousness, differing from what other participants have below theirs, leading to different answers, with no way to select the best one. In the case of what is considered prime, it's a matter of what definition simplifies the things that use it. It could be that two concepts are better, one including 1 and the other not including it. Since it's just a language shorthand, it makes no fundamental difference other than efficiency and clarity in communication about math.
And as is demonstrated by this article, arguing about axioms is a very useful way of doing math exposition :)
While axioms are in some sense arbitrary, it is helpful if they are consistent (informally: you can't prove something that "is false"; formally: you can't prove p and not p). Also other people like it if your axioms feel obvious.
My point is that axioms "feeling obvious" is exactly a signal that they will be useful. The point of deductive reasoning based on axioms is that it is a shortcut to fill in problems of induction, which is what happens when we use pure empiricism.
If you really want to go down the road of solipsism, read Karl Popper.
You can’t axiom your way out of 1 apple and 1 apple being 2 apples together. So axioms are not really that arbitrary.
You implicitly used an axiom to ignore the differences between the apples. Someone else could use different axioms to talk about the sizes of the apples (1 large + 1 small = ?), or the color of the apples (1 red + 1 green = ?), or the taste of the apples (1 sweet + 1 sour = ?).
People "axiom" their way out of 1+1=2 in this way: by changing the axioms, they change the topic, so they change the conclusion. I observe this pattern in disagreements very often.
I have used appropriate axioms, not arbitrary axioms. If you want to talk about size or color or taste, you would use "axioms" appropriate for your case.
They are, by definition. The reason why we choose them is exactly to map a deductive framework onto an inductive reality.
That doesn’t seem to match the definition of “arbitrary”.
We can choose whichever axioms we want. There are still arguments over the axiom of choice, but hardly anyone minds, because it's so useful.
Other good nerd-sniping math questions:
0^0 = 1? Yes, it’s simpler that way.
0! = 1? Yes, it’s simpler that way.
0/0 = ∞? No, it’s undefined.
0.9999… = 1? Yes, it’s just two ways of expressing the same number.
1+2+3+… = -1/12? No, but if it did have a finite value, that’s what it would be.
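Incidentally, Python takes a side on the first two; a quick check:

    import math

    print(0 ** 0)             # 1 -- the "simpler that way" convention
    print(math.factorial(0))  # 1 -- the empty product again
    # 0/0 raises ZeroDivisionError rather than returning infinity:
    # print(0 / 0)            # ZeroDivisionError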
> 0.9999… = 1? Yes, it’s just two ways of expressing the same number.
More a question of place-value representation systems than what most people are thinking of, which is 1 - ε.
> 1+2+3+… = -1/12? No, but if it did have a finite value, that’s what it would be.
The other ones, sure, but I'm not following this one.
https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B...
Well that's really fun! I had no idea, thank you.
You missed the lecture on the misuse of infinities.
If I have inf*k = inf, and divide both sides by inf (the misuse), then 1 = any k, including 1/12. Now this is useless in calculus and number theory, but in quantum field theory it is a useful tool.
So inf = 1/12 and a non-convergent series = a constant, but you have misused dividing infinity by itself to get it.
Infinity for division? It's useful, like counting chickens starting at zero. L'Hôpital's rule is a very useful tool, but do not misuse it.
If we try to define division by zero, shouldn't 0/0 be 1?
Or, even more abstract, "every element y". Which I think could sort of work
But that would mean (0/0) * 2 = 2 but (0/0) * (2/1) = (0 * 2) / (0 * 1) = 0/0 = 1
0^0 got Gemini 2.5 Pro the other day for me. It claimed all indeterminate forms (in the context of limits) are also undefined, as a response to a prompt dividing by zero. 0^0 is the most obvious exception; it's typically defined as 1, as you said.
I'm sure it depends on the definition of prime. I've always been partial to "Any integer with exactly 2 divisors". Short, simple, and it excludes 1 and negative numbers.
> I'm sure it depends on the definition of prime. I've always been partial to "Any integer with exactly 2 divisors". Short, simple, and it excludes 1 and negative numbers.
Depending on your definition of divisor, it excludes everything except 1 and -1, whose two integer divisors are 1 and -1. But then, if you specify that "divisor" means "positive integer divisor", it no longer automatically excludes the negative numbers, since the two positive integer divisors of -2 are 1 and 2. (Incidentally, plenty of algebraists, myself included, are perfectly comfortable with including -2 as a prime.)
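Concretely, that reading of the definition might look like this in Python (a sketch; the helper names are mine):

    def positive_divisors(n):
        """All positive integer divisors of n (slow, for illustration)."""
        return [d for d in range(1, abs(n) + 1) if abs(n) % d == 0]

    def is_prime(n):
        return len(positive_divisors(n)) == 2

    print(is_prime(1))    # False -- its only positive divisor is 1
    print(is_prime(7))    # True  -- divisors 1 and 7
    print(is_prime(-2))   # True on this reading: positive divisors 1 and 2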
This is like a "do arrays start at 0 or 1" question, except as they mention, algebraic number theory pretty much settles it. Whether 0 is a natural number though is still open for bikeshedding.
I always thought that 0-based indexes were superior, until a few years ago I needed to deal with Fortran code and realized that 1-based arrays allow using 0 as a non-existing index or sentinel, rather than the size_t(-1) hack found in C/C++. Like the article explains, depending on the domain, one or the other convention can be advantageous.
And then C/C++ compilers are subtly inconsistent. If 0 is a valid index, then null should correspond to uintptr_t(-1), not to the 0 address. That led to non-trivial complications in OS implementations to make sure that address 0 is not mapped, since from the hardware's point of view 0 is a perfectly normal address.
No, this article makes the case for 0-based indexing. Let's ignore the reality that computers fundamentally use 0-based indexes... The article says 1 is not prime because maths gets more awkward if it is.
In the same way we index from 0 because indexing gets way more awkward if we index from 1.
In-band sentinels are both quite rare and equally convenient with -1 or 0. In fact, I would say -1 is a bit more elegant, because sometimes you need multiple sentinel values, and then you can easily use -2 (what are you going to do, use 0 and 1 and then index from 2?).
The more common operations are things like indexing into flattened multidimensional arrays, or dealing with intervals, which are both way more elegant with 0-based indexing.
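For instance, indexing into a flattened 2-D array needs no correction terms with 0-based indexes; a minimal Python sketch:

    nrows, ncols = 3, 4
    flat = list(range(nrows * ncols))     # stand-in for a 3x4 array

    def at(row, col):
        # 0-based: the offset is simply row * ncols + col
        return flat[row * ncols + col]

    assert at(0, 0) == 0    # first element
    assert at(2, 3) == 11   # last element
    # 1-based indexing needs (row - 1) * ncols + (col - 1) + 1 instead.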
0 is a valid index into an array. It's even a valid index into global memory in some environments. Not mapping memory to address 0 is completely trivial. I'm not sure what non-trivial complications you're thinking of.
Odd to see an article about prime numbers with no mention of ideals. If (1) was a prime ideal then it would be the only non-maximal prime ideal. And it would be the only closed point in Spec(Z)...
1 is not a prime number because it would ruin the Euler product formula for the Riemann zeta function.
I think 1 is so different from other numbers, it seems that in the past, some people did consider 1 to be a prime number. However, by the early 1900s, mathematicians agreed to exclude 1 from the list of primes to keep mathematical rules clear and consistent.
"Only divisible by itself and 1" is a darn elegant definition.
1, 2 and 3 are kind of special to me. In prime distribution studies, I discovered that they are special. It gets easier for some things if you consider primes only higher or equal to 5. Explaining distribution gets easier, some proofs become more obvious if you do that (tiny example: draw a ulam-like spiral around the numbers of an analog clock. 2 and 3 will become outliers and a distribution will reveal itself along the 1, 5, 7 and 11 diagonals).
Anyways, "only divisible by itself and 1" is a darn elegant definition.
When I was younger I had a period when I often thought about prime numbers (before I got old and started thinking about the Roman Empire).
I noticed the same as you, and IIRC the (some?) ancient Greeks actually had an idea of 1 not as a number, but as the unit that numbers were made of. So in a different class.
2 and 3 are also different, or rather all other primes from 5 and up are neighbours of a multiple of 6 (though not all such neighbours are primes, of course).
In base-6 all those primes end in 5 or 1. What is the significance? I don't know. I remember that I started thinking that 2*3=6; maybe the sequence of primes is a result of the intertwining of number systems in multiple dimensions or whatever? Then I started thinking about the late republic instead. ;)
If you work not only the primes, but also the modulus function value of each non-prime, things get even more interesting than thinking of base changes! To me, it reveals much more.
Also, rearrangements.
In two dimensions is easier.
I cannot rearrange one pebble.
I can rearrange two or three pebbles equidistant from each other in just one distinct way (inverting the position of a neighbouring pebble).
And so on...
There are many ways to think of natural numbers without actual numbers.
It's not entirely clear if that definition includes 1. On one hand 1 is certainly divisible by both itself and 1, but on the other hand they are the same number, so maybe it shouldn't count for "both", because the word "both" vaguely implies two distinct things. The usual "natural number with exactly two integer divisors" definition may not be as elegant but I think it is harder to misinterpret.
I never used the word "both" there.
But thanks anyway! I learned a thing.
The 1 exception matters as well for mutual primality: X and Y share no common factors, other than 1 of course, sigh.
I see 1 as mostly an anchor. However, my thing is not about working out axioms and formal mathematics. I do some visualizations that can help demonstrate aspects of prime distribution.
I am fascinated by geometric proofs though. The clock thing is just a riff on Ulam's work. I believe there is more to it if one sees it as a geometric object and not just a visualization drawing. I could be wrong though.
What does primality look like with the addition operation instead of multiplication? 1, 2, 4, 8, ...? Or indeed just 1 alone lol! (Yes, 1 is there because zero is the additive identity)
> One way in which 1 “quacks” like a prime is the way it accords with Euclid’s Lemma, the principle that asserts that if p is a prime, then whenever the product of two integers is divisible by p, one of the two numbers or both must be divisible by p.
This is debunked by https://ncatlab.org/nlab/show/too+simple+to+be+simple#relati...
I think the main issue with "1" being prime is that without "1" each positive integer can be uniquely decomposed into a product of prime numbers. It is probably the most important fact about primes, and in this context "1" does look like an imposter.
In programmer terms, imagine you had to define the product function in Python. The most natural way to write it is:
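Presumably something like this minimal sketch, where the running product starts at the multiplicative identity, 1:

    def product(numbers):
        result = 1          # the multiplicative identity
        for n in numbers:
            result *= n
        return result

    print(product([2, 3, 7]))  # 42
    print(product([]))         # 1 -- the empty product falls out for free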
In which case there is no need to make 1 a prime, as you already have: product([]) == 1.

1 is not a prime number because Disquisitiones Arithmeticae did not regard it as a prime number, and that book is the basis of number theory.
Can we declare 2 composite? Kind of annoying to have an even number in there.
2 being the only even prime isn't really anything fundamentally weird. Every prime is the only divisible-by-that-number prime. There's nothing unique about 2 in that respect.
We only notice the case for 2 because our human languages happen to define divisible-by-2 as a word and concept. If our languages called divisible-by-3 "treven" or something like that, we'd think it weird that 3 was the only treven prime.
It’s a little weird. Numbers being their own additive inverse in characteristic-2 makes for some special cases. (But I guess if we did algebra with ternary operators, 3 might be weird too.)
Only if we can declare 3 composite because it's annoying to have a number divisible by 3 in the primes, and so on for the rest of them.
It’s composite in the Gaussian integers, maybe that helps.
Since 2 is prime #1, wouldn't it be more symmetric if 1 was prime #2?
What makes you think two is prime? Not everyone would agree with that statement, as the article points out.
The article states that historically Nicomachus of Gerasa didn't consider 2 a prime, in like 100 AD.
Nowadays 2 is considered prime. Seems silly to question why someone is claiming 2 is prime if that is how it is defined today.
> What makes you think two is prime
The current mathematical definition of a prime number
> What makes you think two is prime
I will admit that for me it was being brainwashed through years of high school and university mathematics.
No respect for Nicomachus of Gerasa, huh?
Technically yes
If it is, unique factorization in terms of prime numbers goes out of the window, and that is the main reason it usually isn't considered one.
Since 1 is the multiplicative identity (x * 1 = x for any x in the set) and any definition of "prime" needs to use multiplication, one way or another 1 is going to be special when talking about primes, whether it is included in the set of prime numbers or not. You can't avoid 1 being "special".
All models are wrong, but some models are useful. It's not useful to consider 1 prime, so we don't. You're free to invent a new model of math where 1 is prime and see where it takes you; nobody will be offended. This happens all the time: "but what if we could take the square root of a negative number? What then?", etc. 99% of the time, this leads to a theory that is provably inconsistent and therefore useless. Out of the remaining 1%, about 99% of the time it leads to a mathematics that is simply less useful than what we have now. So it goes with making 1 prime. Out of the remaining cases, about 99% of those turn out to be identical to an already existing mathematical theory, which is interesting (and possibly publishable), but not hugely useful. But about 1% of 1% of 1% of the time, these exercises result in actual new math that can tell us new things about reality and solve problems we couldn't solve before.
This is not one of those times.
I've always wondered what actually breaks if 1 is prime, or conversely what defining 1 as not prime gives us. I got just far enough into my math degree, before switching to CompSci to stay out of universities for the rest of my life, to want to know.
The biggest problem is that you lose unique prime factorization. With prime factorization, I get a unique representation of every positive integer. Let's consider a way to write positive integers in "base prime", similar to base 10 or base 2. I'll start counting from 1 and write numbers as a tuple of prime factors. Similar to base 10, "base prime" has an infinite set of 0s that we're leaving out for brevity (e.g. 19 = 0000019), although it's on the right side instead of the left.
The i-th position in every tuple is the power of the i-th prime in the factorization of that number. So 10 = (1, 0, 1) = 2^1 * 3^0 * 5^1. 84 would be (2, 1, 0, 1) = 2^2 * 3^1 * 5^0 * 7^1. If we have unique factorization, there is exactly one way to write every positive integer like this, and there are many insights we can gain from this factorization. If 1 is prime, then we can write 6 = 1^257 * 2^1 * 3^1, or any other power of 1 we like. We just gain nothing from it.

There are often many equivalent ways to define any mathematical object, and I'm sure there are plenty of ways to define a prime number other than "its only factors are itself and 1". These other definitions are likely to obviously exclude 1. One obvious one is the set of basis coordinates in this "unique factorization" space that I just laid out here. And we're never really excluding or making a special case for 1, because 1's factorization is simply the absence of any powers -- empty set, all 0s, whatever you want to call it.
Keep in mind that "unique factorization" turns out to be very interesting in all sorts of other mathematical objects: rings, polynomials, symmetries, vector spaces, etc. They often have their own notion of "prime" or "primitive" objects and the correspondence with integer-primes is much cleaner if we don't consider 1 prime.
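In Python, a sketch of that "base prime" encoding might look like this (the helper names are mine; the prime generator is naive trial division, fine for a demo):

    def primes():
        """Yield 2, 3, 5, 7, ... by trial division."""
        found = []
        n = 2
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1

    def prime_exponents(n):
        """The "base prime" tuple of n: the i-th entry is the exponent
        of the i-th prime, with trailing zeros omitted."""
        exps, gen = [], primes()
        while n > 1:
            p, e = next(gen), 0
            while n % p == 0:
                n //= p
                e += 1
            exps.append(e)
        return tuple(exps)

    print(prime_exponents(10))  # (1, 0, 1)    = 2^1 * 3^0 * 5^1
    print(prime_exponents(84))  # (2, 1, 0, 1) = 2^2 * 3^1 * 5^0 * 7^1
    print(prime_exponents(1))   # ()  -- no prime factors at all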
Some examples are in these comments, e.g. the Fundamental Theorem of Arithmetic. The Sieve of Eratosthenes is an amusing outcome, where 1 is the only prime if you take it literally.
But also mentioned elsewhere in the thread: if we declared 1 to be a prime, then many (I daresay "most") of our theorems would have to change "prime number" to "prime number greater than one".
If you defined 1 to be a prime but to not be odd then some theorems could stay the same.
Ha, yes, I was thinking of the theorems that refer to "odd prime" to exclude 2. :-)
This is the best answer.
We could declare 4 to be a prime number, and keep the rest of the definition the same. Instead of just saying "no", you could ask, "okay, what would that do for us?" If there isn't a good answer, then what's the point? And usually, you're not in the 1% of 1% of 1%.
I've been fascinated by numbers lately, and one of my go-to tools is a simple mobile app that calculates all the divisors of a given number. So I can determine prime numbers, and readily factor the non-primes. And it's been eye-opening.
Now I'm no crackpot numerologist, adding up the numerical values of Bill Gates' name, or telling you who shot JFK. But I can tell you that the main launch pad 39A at Cape Kennedy was not numbered by accident -- look it up in the Book of Psalms. And it's interesting how the city buses around here are numbered. For example, the 68xx series; I look up Psalm 68 and I can definitely imagine the bus singing that as it lumbers down the road -- can't you?
Back to primes -- consider the top number authorities of our times, such as the US Post Office, city planners, and the telephone company (circa 1970s). I ran a chunk of ZIP codes from Southern California and discovered that some factor into two quite large prime numbers. Others yield interesting factors. Once again I pull out my Book of Psalms.
There are plenty of other "hermeneutics" to interpret assigned numbers, especially street addresses. And as for phone numbers, I've gone back to figuring out "what do they spell" on a standard TouchTone keypad, because sometimes it's quite informative.
It's no accident, for example, that the hospital where I was born is located at 4077 5th Avenue. And that number assigned by city planners, many decades before M*A*S*H was written or went on TV. Significant nonetheless.
I also figured out a few prime numbers related to my own life, and others that are recurring tropes, just cropping up at interesting times. What's your social security number? Have you sort of broken it down and pondered if those numbers turned up again and again in your life? Every time I see a number now, I'm compulsively factoring it out in my head. Is it prime? It feels prime. I'll check it in the app later; try some mental math for now.
I'm also counting things more often now. How many spokes in a wheel? How many petals in a flower, especially a flower depicted in art. How many brick courses in that interesting wall they built? Plug any interesting numbers back into the divisors app. Finding the primes, find the factors, just ponder numeric coincidences. It's fun. So many signs and signals, hidden in plain sight before us. Buses singing Psalm 68 as they take on passengers. Launch pads singing Psalm 39 as Europa Clipper slips the surly bonds of Earth. What's on your telephone dial?
1 x 1 = 1
1 x 1 x 1 = 1
...
Not prime!
Depends on your definition of prime. By your reasoning, I could say 7 * 1 * 1 = 7, so it's not prime. Better to say a prime is any number with a set of divisors of size 2, comprising 1 and itself, if you want to exclude 1.
I think we’ll need to wait for an answer to if there is a prime number generating function.
At that time we can determine if 1 is prime.
If it’s found that Eratosthenes’ sieve is the only prime generating function then we have our answer.
We'd have our answer in what way?
I apologize for the ambiguity, it’s apparent when you’ve read the article as it addresses this specific point.
Namely, if the sieve is the only generating function for all of the primes, then 1 would need to be omitted as a prime, since removing its multiples would remove every number, thus failing to generate the list of primes.
If you treat 1 as a prime number when running the sieve algorithm, 1 is the only prime number that remains after you have removed all of its multiples from the list of candidate numbers.
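A quick Python sketch of that failure mode (a naive removal-based sieve, just for illustration):

    def sieve(limit, treat_one_as_prime=False):
        candidates = list(range(1 if treat_one_as_prime else 2, limit + 1))
        primes = []
        while candidates:
            p = candidates.pop(0)
            primes.append(p)
            # Strike out every remaining multiple of p; if p == 1,
            # this strikes out every remaining candidate.
            candidates = [n for n in candidates if n % p != 0]
        return primes

    print(sieve(30))
    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    print(sieve(30, treat_one_as_prime=True))
    # [1] -- 1 is the only "prime" if the sieve takes 1 literally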