As someone who's forgotten most of what I learned in uni 25+ years back and never really used it since, being able to throw math-heavy ML papers at the assistants and get simplified explanations / pseudocode back is absolutely amazing.
The answer is you get the top mathematician in the world to do it, easy peasy.
“The argument used some p-adic algebraic number theory which was overkill for this problem. I then spent about half an hour converting the proof by hand into a more elementary proof, which I presented on the site.”
What’s the exchange rate for 30 minutes of Tao’s brain time in regular researcher’s time? 90 days? A year?
This is where LLMs shine for learning, imo: throwing a paper into Claude, getting an overview, and then being able to ask questions.
Especially for fields that I didn't study at the Bachelor's or Master's level, like biology. Getting to engage with deeper research with a knowledgeable tutor assistant has enabled me to go deeper than I otherwise could.
Isn't there a risk that you're engaging with an inaccurate summarization? At some point inaccurate information is worse than no information.
Perhaps in low stakes situations it could at least guarantee some entertainment value. Though I worry that folks will get into high stakes situations without the tools to distinguish facts from smoothly worded slop.
There is, but there's an equal risk when you engage with any teacher you know on any topic. Everyone has a bias, and as long as you don't base your worldview and decisions entirely on one output, you will be fine.
I've been doing this a fair amount recently, and the way I manage it is: first, give the LLM the PDF and ask it to summarize + provide high-level reading points. Then read the paper with that context to verify details, and while doing so, ask the LLM follow-up questions (very helpful for topics I'm less familiar with). Typically, everything is either directly in the original paper or verifiable on the internet, so if something feels off I'll dig into it. Over the course of ~20 papers, I've run into one or two erroneous statements made by the LLM.
To your point, it would be easy to accidentally accept things as true (especially the more subjective "why" things), but the hit rate is good enough that I'm still getting tons of value through this approach. With respect to mistakes, it's honestly not that different from learning something wrong from a friend or a teacher, which, frankly, happens all the time. So it pretty much comes down to the individual person's skepticism and desire for deep understanding, which usually will reveal such falsehoods.
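If it helps, the loop is simple enough to write down; here's a rough sketch (`ask_llm` is a hypothetical stand-in for whatever chat client or API you actually use, and the prompts are just paraphrases of what I described above):

```python
# Rough sketch of the two-pass loop described above. `ask_llm` is a
# hypothetical stand-in for whichever chat client/API you actually use;
# only the prompts and the order of the passes matter here.
def read_paper(paper_text: str, ask_llm) -> None:
    # Pass 1: summary + high-level reading points, before opening the paper.
    overview = ask_llm(
        "Summarize this paper and list the high-level points I should keep "
        "in mind while reading it:\n\n" + paper_text
    )
    print(overview)

    # Pass 2: read the paper yourself with that context, asking follow-ups
    # for unfamiliar topics; verify anything that feels off against the
    # original paper or the internet.
    while question := input("Follow-up question (blank to stop): "):
        answer = ask_llm(
            "Regarding this paper:\n\n" + paper_text + "\n\nQuestion: " + question
        )
        print(answer)
```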
I can probably process anything short and high-level by myself in a reasonable time, and if I can't, I will know, while the LLM will always simulate perfect understanding.
There is, but just ask it to cite the foundational material. A huge issue with reading papers in topics you don't know about is that you lack the prerequisite knowledge, and without a professor in that field it can be difficult to really build it. ChatGPT is a huge productivity boost. Just ask it to cite references and read those.
I'm not sure of the exact dollar value of feeling safe enough to ask really stupid questions that I should already know the answer to, and that I'd be embarrassed if anyone saw me ask Claude, but it's more than I'm paying them. Maybe that's the enshittification play: an extra $20/month if you don't want it to sound judgey about your shit.
Math notation is high-context, so it's great to just ask LLMs to print out the low-context version in something like Lisp, where I can read and decompose it quickly.
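For instance (a toy illustration of what I mean), a one-line formula packs a lot of implicit context, while the spelled-out version can be read top to bottom:

```python
# Toy illustration of a "high-context" vs. "low-context" rendering of the
# same quantity (sample variance).
#
# Notation:      (1/n) * sum_{i=1..n} (x_i - x_bar)^2
# Prefix form:   (/ (sum (i 1 n) (sq (- (x i) x_bar))) n)
# Spelled out:
def variance(xs: list[float]) -> float:
    n = len(xs)
    x_bar = sum(xs) / n                 # the implicit "bar" made explicit
    return sum((x - x_bar) ** 2 for x in xs) / n
```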
'Vibe formalizing' is a logical extension of 'vibe engineering' implemented by 'vibe coding'. Sometimes I have trouble getting the individual puzzle pieces of a problem to fall into place; a hypothetical 'Move 37 as a Service' that unifies informal methods with mathematical rigor deserves to be explored!
I had put some polyhedral compilation papers on the back burner. I had read all the material, but a few key questions still kept me from implementing it. In particular, I was looking at Barvinok's counting algorithm and did not understand why you needed to expand the polynomials in a pointed cone. However, ChatGPT correctly led me through the reasoning. Could it have made a mistake? Of course. And it did. But since my confusion meant that I was also wrong, bouncing the idea back and forth was really useful. Plus the AI bots are better at understanding your own particular points of confusion.
I'm not a mathematician, but how credible is that anti-Lean material? Are they marketing an alternative programmatic approach, as in they're anti-Lean because "I've got something else", or are they philosophically anti-Lean with valid arguments?
It's mainly the latter, although the author makes half-hearted gestures at some sort of CAS (Computer Algebra System) being better.
It's not very credible. There are individual fragments that make sense but it's not consistent when taken together.
For example, by making reference to Gödelian problems and his overall mistrust of infinitary structures, he's implicitly endorsing ultrafinitism (not just finitism, because e.g. PRA, the usual theory for finitary proofs, also falls prey to Gödel's incompleteness theorems). But this is inconsistent with his expressed support for CASes, which very happily manipulate structures that are meant to be infinitary.
He tries to justify this on the grounds that CASes only perform a finite number of symbol manipulations to arrive at an answer, but the same is true of Lean, otherwise typechecking would never terminate. Indeed this is true of any formal system you could run on a computer.
Leaving aside his inconsistent set of arguments for CASes over Lean (and there isn't really a strong distinction between the two, honestly; you could argue that Lean and other dependently typed proof assistants are just another kind of CAS), his implicit support of ultrafinitism would already require a huge amount of work to make applicable to a computer system. There isn't a consensus on the logical foundations of ultrafinitism yet, so building out a proof checker that satisfies ultrafinitistic demands isn't even really well-defined and requires a huge amount of theory crafting.
And just for clarity, finitism is the notion that unboundedness is okay but actual infinities are suspect. E.g. it's okay to say "there are an infinite number of natural numbers", understood as shorthand for "there is no bound on the natural numbers", but it's not okay to treat the infinitary object N of all natural numbers as a real thing. So e.g. some finitists are fine with full PA rather than just PRA.
On the other hand, ultrafinitists deny unboundedness and say that sufficiently large natural numbers simply do not exist. Most commonly this is operationalized as follows: whether a number exists is a matter of a computation that scales with the size of the number; if the computation has not completed, we cannot have confidence the number exists; hence sufficiently large numbers, for which the relevant computations have not been completed, do not exist. This means e.g. quantification, or statements of the form "for all natural numbers...", is very dangerous, and there's not yet a complete consensus on the correct formalization of this from an ultrafinitistic point of view (or on whether such statements would ever be considered coherent).
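To make the distinction concrete, here's a minimal Lean 4 sketch (my own toy illustration, not anything from the post under discussion): the "no bound on the naturals" reading is something you can constructively witness for each n, while treating Nat itself as one completed object is exactly the move a strict finitist resists.

```lean
-- Minimal sketch (mine): the "unbounded" reading a finitist accepts is a
-- constructive statement, witnessed explicitly for each individual n.
theorem no_largest_nat : ∀ n : Nat, ∃ m : Nat, n < m :=
  fun n => ⟨n + 1, Nat.lt_succ_self n⟩

-- Whereas treating Nat itself as a single completed object (which Lean, like
-- any CAS, does all the time) is the "actual infinity" step that a strict
-- finitist or ultrafinitist is suspicious of.
#check Nat   -- Nat : Type
```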
It seems pretty obviously machine generated, and the appearance of the word "metaphysics" in the output suggests the prompt author didn't know what they were talking about to begin with.
I think Zeilberger is taken heavily out of context and confused with Norman Wildberger a lot; he certainly has some eccentric opinions, but that one is not at all reflected in his blog's contents (which are largely things like "[particular paper] presents [conjecture/proof] that can be [resolved/shortened] by routine methods", routine only because of his decades of work). It's a shame that his being the go-to example of a crank seems to have become ingrained in LLMs.
I've had mixed results with AI on research mathematics. I've gotten it to auto-complete non-trivial arguments, and I've found some domains where it seems hopelessly lost. I think we're still at a point in history where mathematicians will not be replaced by AI and can only benefit by dabbling with it.
I've had similar results in both mathematics and programming. For one paper I was writing, I wanted to try them on a fairly straightforward problem of counting permutations. I spent much more time trying to get the AI to figure it out than it took to actually solve it, and I couldn't get it to do it even after I had solved the problem myself. Similarly, in coding I've had it fail to find trivial bugs like an undefined keyword, which would have easily been caught had I turned on ctags (the keyword was inherited from a class, which made it easy to miss). But I've also had them succeed on non-trivial tasks, and those have been a great help, though nothing has ever been end-to-end just the AI.
So I agree. I think these tools are very helpful, but I also think it is unhelpful that people are trying to sell them as much more than they are. The overhype not only legitimizes pushback but provides it with ammunition. I believe the truth is that they're helpful tools, but still far from replacing expertise, and if we oversell them we run the risk of ruining their utility. If you promise the moon, you have to deliver the moon. That's different from aiming for the moon and landing in the trees.
I still can't believe that we've reached the Star Trek era of "Computer, plot me a proof for this math problem" in my lifetime! Wish we could also do the same for "Beam me up, Scotty".
Thinking about such questions before we are capable of doing such an experiment, at least with small animals, is like discussing how many angels can stand on the point of a pin.
I was told by a Hungarian that Hungarian written spelling and spoken pronunciation are pretty precisely aligned compared to, say, English. Except when it comes to names, when it gets a bit random!
Why not do the bloke the decency to spell his name correctly? Those diacritics are important.
Anyway, I was told that Paul's name is very roughly pronounced by an anglophone as: "airdish".
I take it that that's a palatalized ending? I read your comment at first and was like "airdish" wtf? Then I palatalized the 'os' ending and realized oh yeah... that does sound kind of like airdish!
Depends. There are names that are "romanized" to Hungarian pronunciation rules, like Dosztojevszkij (Dostoevsky) or Kolumbusz Kristóf (Cristoforo Colombo; Hungarian puts the family name first), though this is no longer the practice and it's mostly used for historic names only. That is, Trump is written like that, and not as we would pronounce it (something like "Trámp").
In general, if the source language has a Latin alphabet, we try to stick to the original spelling in most cases, but it is not uncommon to replace non-Hungarian letters with the closest ones. It's a bit more complicated in the case of non-Latin alphabets, especially Cyrillic, due to a lot of shared history.
I was driving around tonight doing errands and, while I was doing so, had a great conversation with ChatGPT about the intimate details of the LLVM and GCC pipeline schedulers. It's a huge productivity boost. It has now taken notes for me for some compiler stuff I'm experimenting with. This would previously have been impossible.
Because he is smart enough to use the existing (frontier) tools to get good results and create a sort of collaborative environment that is novel for research maths.
As for collaboration, I meant: https://terrytao.wordpress.com/2024/09/25/a-pilot-project-in... The issue with horizontal scaling of maths research is trust: if you don't know the author, it is more work to verify their work, especially non-formal proofs. Lean 4 enables large projects to be split up into pieces where Lean can validate each intermediate result, so a much broader group of people can contribute pieces without jeopardizing the overall soundness.
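A toy sketch of what that looks like (mine, not taken from the pilot project itself): the statement of each piece is fixed up front, contributors replace `sorry`s independently, and Lean re-checks that every filled-in piece still fits the overall result, so trust in individual authors never enters into it.

```lean
-- Toy sketch (not from the pilot project). A contributor who only works on
-- `intermediate_bound` can replace the `sorry` without anyone having to trust
-- them: Lean checks their piece against its stated interface.
theorem intermediate_bound (n : Nat) : n ≤ n + 1 := by
  sorry

-- The headline result depends only on the *statement* of the lemma above, so
-- overall soundness is preserved however the pieces get filled in.
theorem final_claim (n : Nat) : n ≤ n + 2 :=
  Nat.le_trans (intermediate_bound n) (Nat.le_succ (n + 1))
```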
Wiles's initial presentation of the proof had a serious flaw that killed the whole thing until he found a workaround. I don't remember how long it took him to get out of the jam, but I'm sure he would have handed his credit card to the Devil himself if he thought it might help.
People who don't take advantage of the best available tools and techniques don't get to that level to begin with.
Collaborative environment meaning that any PFY employed by the "AI" providers can read your most intimate thought processes and keep track of embarrassing failures or misconceptions.
The embarrassing failures or misconceptions of math experts with regards to research level mathematics? Definitely a serious problem.
Though by your "Perelman and Wiles didn't need "AI" assistance" comment, you'd surely be there on the sidelines to ridicule them for each and every single one. I guess maybe that's where your concerns are coming from?
I can practically see how these concerns of yours would suddenly evaporate if they started using self-hosted models instead... yeah, right, who are we kidding?
If mathematicians aren't occasionally saying something that's obviously wrong to a 300-level student, are they really pushing the envelope? I'm just a programmer, but I find that my, and my coworkers', biggest insights always come right after we've said something seriously dumb.
Thankfully it's mathematics, so people powerscaling their idols, deifying them to the detriment of others, and putting terms in quotes mockingly is not what determines whether results hold. Perhaps it's the only field not fundamentally shackled by this type of quackery, even if people try their hardest from time to time to make it so.
It's fine, at least you admit that what you wrote was just to insult.
For people who at least pretend to care about not thinking in strawmen: it's been nearly six years, and their deus has never exited said machina (if it's ever been in there to begin with, or anywhere else).
No it isn't.
You won't be able to verify everything you're taught from first principles, so at some point you do have to give different sources different credibility, I think.
The texts have to be short and high-level for the assistants to have any chance of accurately explaining them.
Even imperfect assistants increase leverage.
https://aristotle.harmonic.fun/
TIL: a startup founded by the Robinhood CEO.
Or the author is not natively fluent in English.
You might die every time you do, though, so maybe not.
A theorem both deep and profound
States that every circle is round
But in a paper by Erdős
Written in Kurdish
A counterexample is found
https://www.lotas.ai/erdos