DeepSeek v4

(api-docs.deepseek.com)

1125 points | by impact_sy 9 hours ago

93 comments

  • jari_mustonen 5 hours ago
    Open Source as it gets in this space, top notch developer documentation, and prices insanely low, while delivering frontier model capabilities. So basically, this is from hackers to hackers. Loving it!

    Also, note that there's zero CUDA dependency. It runs entirely on Huawei chips. In other words, Chinese ecosystem has delivered a complete AI stack. Like it or not, that's a big news. But what's there not to like when monopolies break down?

    • chvid 4 hours ago
      The incredible arrogance and hybris of the American initiated tech war - it is just a beautiful thing to see it slowly fall apart.

      The US-China contest aside - it is in the application layer llms will show their value. There the field, with llm commoditization and no clear monopolies, is wide open.

      There was a point in time where it looked like llms would the domain of a single well guarded monopoly - that would have been a very dark world. Luckily we are not there now and there is plenty of grounds for optimism.

      • sigmoid10 4 hours ago
        Still not sure how I feel about China of all places to control the only alternative AI stack, but I guess it's better than leaving everything to the US alone. If China ever feels emboldened enough to go for Taiwan and the US descends into complete chaos, the rest of the world running on AI will be at the mercy of authoritarian regimes. At the very least you can be sure noone is in this for the good of the people anymore. This is about who will dominate the world of tomorrow. And China has officially thrown their hat in the ring.
        • 2ndorderthought 37 minutes ago
          I don't see the issue. China hosts the alternatives or the only game in town for lots of technologies. China has every interest and right to create products. Not everything that comes out of China is some devious plan to do terrible things. It's people trying to make money just like you and me.

          I am not washing away the authoritarianism, but take a look at other economic super powers directionality. Or that of tech ceos as well. At least Chinese tech companies aren't going around praising wwii Germany, writing manifestos, and bombing children at school or fisherman on whims. It is difficult not to see more countries regardless of leadership putting their hat in the ring as a net positive. Especially if it increases sustainability and lowers the price, which this very clearly does. It's even open source...

        • Ladioss 4 hours ago
          I always find it an illuminating experience about the power of mass propaganda every time I see an American believe they somewhat have the moral high ground over China, despite starting a new war somewhere around the globe either for petrol or on behalf of Israel every six months.
          • kiba 4 hours ago
            Just because America is doing bad things doesn't mean China is good, or vice versa.
            • LinXitoW 2 hours ago
              Of course not, but that's never how Americans act. The commenter didn't say "I don't like that the only two serious competitors are from the USA and China", they ONLY called out China.

              It's a small difference, but important. Especially because that person is far more likely to be responsible (voting) for and profiting from USAs bad stuff.

              • hirako2000 1 hour ago
                In fact, unless the comment is from someone living in China: understands the politics, it would only be fair to critique the authoritarian aspects of the government they actually know.

                The issue is propagandists are typically brainwashed already.

                • amunozo 40 minutes ago
                  Plenty of people around the world know about the authoritarian aspects of the US way better than the Americans, as they suffer their consequences.
                • 0xDEAFBEAD 1 hour ago
                  Do you believe only Americans should be allowed to critique the American government?

                  I'm an American and I don't believe that.

                  • hirako2000 27 minutes ago
                    No I don't mean one needs to be American. The reciprocal isn't valid. I talked about China. Given the misinformation the "western emisphere" has been subject to, I would find it dubious to get the echoes of what mainstream media portrays it as, even though there are elements of truth in what most people believe.

                    The U.S politics are easier to understand from the outside. For one it's a democracy, a more transparent process despite a lot is happening behind curtains. I have no idea what North Koreans are able make of the U.S scene, I know for sure people in U.S and Europe are hardly able to comment on N.K.

                    tldr: I'm with you non Americans (and Americans) are perfectly able to critique the U.S with some valuable accuracy.

              • strangegecko 1 hour ago
                He said "At the very least you can be sure noone is in this for the good of the people anymore. This is about who will dominate the world of tomorrow.".

                I.e. he doesn't see the US as "the good guys" either.

                Pointing out the war threat from China isn't hypocritical just because you don't list all the war threats from the US at the same time.

            • razodactyl 2 hours ago
              I think a lot of us are blinded by our own propaganda. I would expect many Chinese geeks to have the same values as us for the greater good of humanity.
            • Lapel2742 3 hours ago
              > Just because America is doing bad things doesn't mean China is good, or vice versa.

              Of course not. When it comes to SOTA LLMs you have the choice between two bad options. For many, choosing the Chinese option is just choosing the lesser of two evils (and it's much cheaper).

              • eloisant 3 hours ago
                Why people always dismiss the European option?

                Mistral is right here, their models are in-between the cheap to run Chinese models and top of the line performances of US frontier models.

                • roenxi 1 hour ago
                  People are probably assuming that the trends from the last few decades continue. The EU fumbled semiconductors, production went to Asia. The EU fumbled the software revolution, the successes mainly came from the US. They fumbled the transition to smartphones despite the Nokia advantage. They missed tablets; seemed like they just didn't have the industrial vigour to make a serious attempt.

                  The safe money is they are going to be an also-ran for the AI revolution. They did manage to force Apple to switch from using lightening connectors to USB though so their wins can't just be laughed off. Maybe they'll surprise us but it'd be a welcome change from their usual routine.

                  • sofixa 24 minutes ago
                    > The EU fumbled semiconductors, production went to Asia

                    Production of state of the art semiconductors, yes. NXP, STMicro, Infineon are still there and massive in automotive, industrial, card chips, etc.

                    > The EU fumbled the software revolution, the successes mainly came from the US

                    Worldwide massive success, mostly yes. Most European countries have their local or regional success stories though.

                    > The safe money is they are going to be an also-ran for the AI revolution

                    Not really. Past performances, or lack thereof, are not indicative of future ones.

                    Mistral are pretty good and selling well in the enterprise space. Some of the best voice models are coming from France (Kyutai).

                    • joe_mamba 1 minute ago
                      >Production of state of the art semiconductors, yes. NXP, STMicro, Infineon are still there and massive in automotive, industrial, card chips, etc.

                      State of the art is where the big money is. The EU semi companies you listed are absent from there and only make low margin commodity parts. Hence the claim of EU fumbling semiconductors is correct. No, ASML is not enough for claiming EU superiority since the EUV light source is still US IP designed and manufactured.

                      >Worldwide massive success, mostly yes. Most European countries have their local or regional success stories though.

                      Worldwide success is where the big money is, and you need a lot of money for cutting edge research and experimentation to build the future. EU mom and pop shops aren't gonna make enough money to be able to afford risky ambitious ventures the likes of FAANGs have.

                • GistNoesis 9 minutes ago
                  Europe is always 10 years ahead in all theoretical aspects.

                  Then they need money.

                  So most of the talent flee or get bought, typical example in machine learning space is huggingface or fchollet.

                  Then European government plays catch-up and offer subventions, but at the same time makes rules to make sure companies don't threaten US dominance, or Asian manufacturing.

                  Mistral is typically playing catch the subsidy game.

                  Europe is constructed so that it can't win, but can "pick" the winner between scylla and charybdis, pest and cholera.

                • Lapel2742 2 hours ago
                  > Why people always dismiss the European option?

                  Mistral is good for many tasks where you do not need SOTA or near SOTA performance. They cannot compete if you do.

                • 3s 2 hours ago
                  It’s not top of the line and mostly not open source
                • john_minsk 2 hours ago
                  For a lot of people in the world Europe = USA
                  • benterix 2 hours ago
                    But this makes zero sense. Two different continents, values systems, law systems. Not to mention the current USA administration is openly hostile to Europe. So why would anyone confuse the two.
                    • coliveira 28 minutes ago
                      Europe is at the mercy of the USA. Any difference in posture is due to local politics which can swing local elections, but European leaders are willing and eager to do what the US wants.
                    • quantum_state 1 hour ago
                      Europe will not be independent as long as there are US military bases there. Saying otherwise would be kidding oneself.
            • zaphirplane 1 hour ago
              I think people worry about monopolies, be it financial or otherwise
            • flossly 1 hour ago
              > Just because America is doing bad things doesn't mean China is good, or vice versa.

              When someone points out hypocrisy, this is "the answer", it seems. But it is just a statement, not a rebuttal of the hypocrisy that was pointed out.

              Hypocrisy is still hypocrisy.

              And bad things are bad things. Yet no amount of propaganda (red scare, "eew dictatorship", Uyger-genocide, Taiwan threat) can convince me that the China is as evil (or more evil) than the US-Israel alliance of the the last 50 years.

              • strangegecko 1 hour ago
                Hypocrisy would be if the person only points out Chinese authoritarianism without acknowledging problems e.g. in US policy.

                Not mentioning US problems every time they criticize CCP problems is not automatically hypocrisy, and this idea basically means you cannot criticize anything without criticizing everything someone considers just as bad or worse at the same time.

                Calling a discussion on China hypocritical because it doesn't say "but US worse" is essentially trying to build in whataboutism into every discussion.

                It's a symptom of increasing polarization and part of the problem.

            • cpursley 2 hours ago
              Yeah, idk this looks pretty good and they ain't bombing anyone nor trying to spread global communism USSRs style:

              https://www.youtube.com/watch?v=P7W20hdgWXY

              I think I'll take the open AI models, innovative high quality EVs and cheap solar panels, please.

          • nipponese 39 minutes ago
            The U.S. is not the country conducting amoral behavior with terrorist regimes for oil, that’s China.

            We conduct amoral behavior with terrorist regimes for dollars.

          • vsgherzi 3 hours ago
            Not about moral high ground. Ones a democracy one isn’t.
            • spaceman_2020 2 hours ago
              Your democracy has consistently voted senile 75 year olds for 3-elections now

              The current president - who Americans voted for twice - is heavily accused of being a pedophile and has reneged on every one of his poll promise

              Really not the best advertisement for democracy

              • wiseowise 2 hours ago
                The difference is that there was (at least an illusion of) choice. Nobody said that it is a perfect system. And Trump will be gone in 3 years, while Putin and Xi will stay in power until their death.
                • spaceman_2020 1 hour ago
                  I don't understand why Americans continue believing that democracy is the only way for every population in the world

                  Why would Russians want democracy? Or the Chinese, for that matter? There have been zero democratic impulses in their societies across hundreds, even thousands of years.

                  The west needs to rest its democratizing mission and accept that every society is fundamentally different

                  My country (India) got a "thriving" democracy, but because there is no real democratic impulse in the society, everything on the ground has devolved into what the society was always like - quasi-feudal bureaucracy

                  • hootz 1 hour ago
                    >I don't understand why Americans continue believing that democracy is the only way for every population in the world

                    Well, ideology. I believe my way is the only way for every population in the world too, and I fight for it to happen. Of course, each place adapts to their own condition, but I believe my core ideology is the way for humanity as a whole, and I believe it is the same for people who defend western american-style democracy.

                  • nailer 1 hour ago
                    > Or the Chinese, for that matter?

                    The marched for it en masse in 1989?

                    Russians and Chinese are also people. They deserve to rule themselves.

                    • jyscao 59 minutes ago
                      An ideologically driven subset of urban educated youths that was proportionally a tiny subset of the entire Chinese population marched for it in 1989. FTFY.

                      They are ruling themselves in the sense that their governing systems are emergent consequences of their own cultures. All peoples ultimately deserve the governments they have.

                  • sofixa 20 minutes ago
                    > I don't understand why Americans continue believing that democracy is the only way for every population in the world

                    It's not Americans, it's educated people who believe in personal liberties.

                    > Why would Russians want democracy

                    Because they would have a choice if they want to be robbed blind by a bunch of oligarchs, and if they want to be sanctioned off from the world because the supreme leader decided he wants to kill and maim a million Russians to achieve nothing more than killing Ukrainian civillians.

                    > There have been zero democratic impulses in their societies across hundreds, even thousands of years

                    Absurdly bad historic revisionism. Russia had democratic impulses in 1917 and 1990, both hijacked and went nowhere. China's 1911 revolution was also overtly democratic in nature, but was also hijacked.

                    • spaceman_2020 1 minute ago
                      > It's not Americans, it's educated people who believe in personal liberties.

                      I find this attitude deeply parochial and colonial. Who are these so-called "educated people" (most of whom would be in western developed nations) to decide what sort of governance system a country should have?

                      The democratic revolution in America and France came from its own people. If the Russians or the Chinese want democracy, they'll get it on their own

                      Western hand-wringing about the "lack of democracy" in foreign (usually poorer) countries is just concern-colonialism. I think most of these educated people should focus on their own countries and let the rest of the world be

                • cycomanic 1 hour ago
                  At some point I saw an analysis that looked at the policy/political differences between the different fractions of the Chinese communist party and compared them to the spread in a western parliament (I don't remember which one I think US or UK). They found that the spread was very similar. With that I'm not saying that the Chinese system is better, just that these statements are not as straight-forward as one things.

                  I think a much better metric is suppression of dissent, human rights records etc., not (the illusion of) choice at the poll booth once every 4 years.

                  • TheOtherHobbes 1 hour ago
                    The marketing pitch of Western "democracy" has always been that you can criticise your government freely and the government won't jail you or murder you.

                    Also, consumer goods.

                    The voting and multiple-branches-checks-and-balances elements are sidelines.

                    Currently none of those promises are true in the US. The government is murdering and jailing people for whimsical and self-indulgent reasons, the consumer economy is about to crash, and the only checks-and-balances are the checks going straight to the Emperor's private accounts.

                    To be fair, there's some judicial pushback, and some political friction.

                    But Senate and Congress are wholly captured, the opposition is flaccid and foreign-funded, media independence is a myth, and the last time The People had any real influence on policy was the 70s. Possibly.

                    I have no idea if China is "better". From a distance China seems to be doing much better at building useful things and making long term plans.

                    But ruling cliques always seem to end up being run by psychopaths, so my expectations for humanity from China's rulers aren't any higher than those for the US.

            • dspillett 2 hours ago
              I'm going off democracy, at least how it is currently implemented. It is proving far too easy to pervert.

              It turns out that the people will vote for some terrible things in order to get that one petty little thing a given candidate promises and they want, or because they don't like something specific about the other candidate(s). And of course many may later say “well, I didn't vote for that” when they quite demonstrably did.

              • benterix 2 hours ago
                Well, the politicians learned how to game the system well. Now people need to learn how to game the politicians. A formal verification process of pre-election promises would be a good start.
                • iso1631 1 hour ago
                  Nobody cares that politicians don't keep pre-election promises. And in most cases they shouldn't, circumstances change. You can have no intention of doing something, then something else happens, and you change your mind.

                  The problem is that people put stock in pre-election promises, rather than voting for the character of the person they want to represent them.

              • iso1631 1 hour ago
                > When a measure becomes a target, it ceases to be a good measure

                The measure is the number of votes. "What shall we have for dinner" measures things, there's no target in a "curry vs pizza vs thai" poll, and it doesn't really matter, the target is a nice night in with a film.

                However with politics, getting power is the goal, thus the number of votes is thus the target, and thus its not good at measuring what the country actually wants, just who can best get the most votes.

                This isn't new, but modern brainwashing allows manipulation at a scale hitherto unseen.

            • makingstuffs 2 hours ago
              How can there be democracy in an environment where freedom of thought is all but nullified due to social manipulation through mainstream media. Calling something ‘free’ doesn’t make it so.

              The reality is that the term democracy in western society has essentially become meaningless due to the swathes of algorithmic manipulation which occurs every second of everyday through every possible digital medium.

            • Al-Khwarizmi 54 minutes ago
              The moral weight of democracy is heavily overrated. Of course democracy is better than autocracy, all other things being equal. But I don't think a democracy that starts wars and bombs a new country every other year is morally superior to any relatively peaceful autocracy. Rather the opposite.
            • sscaryterry 2 hours ago
              "Democracy is the worst form of government, except for all the others that have been tried". Winston Churchill
              • jampekka 1 hour ago
                Socialism with Chinese Characteristics had not been tried at that point :)
                • usrnm 12 minutes ago
                  To be fair, Deng Xiaoping's reforms were based on the older New Economic Policy or NEP from the 1920s USSR, so it had been tried at that point. It was scrapped in the USSR for other reasons, not because it failed.
                • crimsoneer 58 minutes ago
                  The word you're looking for is dictatorship, and it is not new.
                • sscaryterry 1 hour ago
                  Exactly, maybe we've got it all wrong :)
            • lmz 3 hours ago
              So that means the people are complicit in whatever wars the US started. Not sure if better or worse.
              • nuancebydefault 1 hour ago
                A lot of people voted for someone who was known to be an evil crook. It was very clear that he got into politics for praising his own ego. They voted against 'the good' in the hope for their own benefit and against that of the world. If they did not 'expect' the current state of affairs then they just refused to listen to their own heart.
            • chvid 2 hours ago
              Germany was (formally) a democracy when it fought the Soviets.
            • sucrosesucrose 2 hours ago
              And why should anyone prefer a democracy over any other form of government? Doesn't it depend on the philosophy of each People?
              • wiseowise 2 hours ago
                > created: 18 minutes ago

                Right.

                • hootz 1 hour ago
                  He didn't even say anything outrageous, he's just participating in the discussion. People can create accounts to be able to reply to a discussion, even throwaways.
            • Jackpillar 2 hours ago
              Democracy is a stretch
            • monadgonad 2 hours ago
              "Not about moral high ground. One's an ideology my morals agree with, one isn't."
              • wiseowise 2 hours ago
                Is believing people should have a choice a moral high ground now?
                • modo_mario 30 minutes ago
                  You have a 2 party system where on many fronts both parties tow (almost) the same line and roughly behave like a oneparty system.
                • kaoD 57 minutes ago
                  No, but believing our so-called "democracy" (quotes intended, read: "21st century western systems") is how you give people "a choice" is the moral high ground. That is your axiom, but it's often touted as a tautology.

                  The name says "demos" and "kratos" but names are names, not facts.

                  There are many ways to give people a choice and this one has proven to be quite ineffective at that, as it slowly devolved into a plutocracy/oligarchy. Iron law of oligarchy, yadda yadda.

                  What they are very effective at though: crushing dissent, calming the masses with a reassuring illusion of choice, and touting itself as the "one true way".

                  When I look at the outcomes I don't see any semblance of democracy, only a ritual dance/theatre show every 4 years. A farce as big as the "democratic" instruments on the PRC.

                  There's a reason this "democracy" is very diligent at discouraging association and unionizing. Those give actual power to the people (and with power comes choice). That's dangerous. People might start believing they can actually influence the outcomes.

                  "Don't blame me - I voted for Kodos"

                  • sofixa 15 minutes ago
                    > our so-called "democracy" (quotes intended, read: "21st century western systems")

                    Do not conflate the broken American political system, the semi-broken British one, and the whole rest of the "west". Each country has its own political system, and they are wildly different.

                    > crushing dissent

                    Democracies are good at crushing dissent? Compared to other political systems? That's just not true. All other political systems rely on universal truth and unwavering trust in a person / religion / clique of people, who can do no wrong and can never be criticised.

                    > There's a reason this "democracy" is very diligent at discouraging association and unionizing

                    What? You are probably talking about a specific democracy, and the most broken one at that.

                    • kaoD 4 minutes ago
                      > and they are wildly different

                      As someone from the "whole rest of the west", no, they're not different at all. Very minor details change, but the net outcome is the exact same.

                      You can't escape the iron law of oligarchy.

                      > Democracies are good at crushing dissent?

                      They're not only good: they are the best. You doing need to curb dissent by violence if you discourage dissent by manipulation.

                      "What are you, antidemocratic!?"

            • drcongo 2 hours ago
              Can you clarify which is which?
              • sigmoid10 2 hours ago
                Chinese propaganda seems to hit very hard these days. If you really don't know, you seriously need to check what media you are consuming. Yes, the US has huge problems, many old and some new, but on a serious technical level the answer is (at least for now) 100% clear.
                • mysecretaccount 32 minutes ago
                  > Chinese propaganda seems to hit very hard these days.

                  Assuming that everyone who disagrees with you is a propagandized bot is a terrible way to live. You will not learn.

          • wiseowise 2 hours ago
            What makes you think they’re American?
          • mrkramer 2 hours ago
            All empires are to some degree evil because their agenda is to dominate weaker peoples and nations. They almost all committed crimes against humanity and genocides if you look retrospectively from the todays point of view. Even our beloved Roman Empire that the Western civilization is built upon was genocidal empire.
            • benterix 2 hours ago
              Not sure if we can call it "beloved". For sure respected for what it did to build the base of modern civilization, but we are aware of its dark sides. And probably Nero would be an excellent example of what can happen to the empire and its people when a crazy person becomes its ruler.
          • Der_Einzige 1 hour ago
            One province of China has enough hellish nightmarish bullshit going on caused by the CCP that we maintain total moral superiority over them. It’s not even a question to anyone except “fellow travelers”.
          • melagonster 3 hours ago
            [dead]
          • OCASMv2 3 hours ago
            [flagged]
          • willsmith72 3 hours ago
            [flagged]
            • latexr 3 hours ago
              > That doesn't mean it's positive in human rights

              Neither is the US, land of slaves, segregation, and the KKK. They did seem to get better there for a few of decades, but sure are working hard to return to their roots.

            • birdsongs 3 hours ago
              > That doesn't mean it's positive in human rights

              Isn't the US building mass detention camps right now for all the brown people there? And arresting / detaining / demanding papers from any and everyone? With federal agents killing civilians?

              Don't get me wrong, China is also horrible here, they have their own camps.

              But pretending the US is positive wrt human rights is a wild take in 2026.

              • Levitz 3 hours ago
                >Isn't the US building mass detention camps right now for all the brown people there?

                No, it is not, but the freedom of speech protections the US has (that China doesn't) allow for such commentary.

              • sam_goody 3 hours ago
                > sn't the US building mass detention camps right now for all the brown people there?

                Why would you think that?

                > And arresting / detaining / demanding papers from any and everyone?

                I have lots of friends from outside the U.S. that come regularly and don't find it onerous. Maybe it depends where you are coming from?

                > With federal agents killing civilians?

                OK, I agree that there are issues, and even very serious ones. Obviously, not on the level of China, but still serious issues. Nonetheless, what you see on left leaning media is not representative of what is happening on the ground throughout the U.S. Not even close.

                IMO, the US is definitely positive wrt human rights. There are issues, but you can go to a No Kings protest, and live your life happily without issues, and it is hard to find another country that is nearly as forgiving. And it at least has people trying to spread concepts of individual liberty, vs most countries in Europe, almost all countries in Asia, and ALL Muslim countries, that are leaning to removing individual rights.

              • mapcars 3 hours ago
                >Isn't the US building mass detention camps right now for all the brown people there?

                No? Its for illegal people, regardless of color. Just so happened that most illegals come from specific places

                • drcongo 3 hours ago
                  kein Mensch ist illegal
            • me551ah 3 hours ago
              With the number of wars that the US have waged over the years including in Vietnam, Iran and supporting Israel. I don’t think even the US has done a stellar job in defending human rights.

              If you meant American citizen human rights, then you’re correct.

              • latexr 3 hours ago
                > If you meant American citizen human rights, then you’re correct.

                Not even that. ICE has already killed US citizens, they no longer prohibit segregation, trans people were banned from the military, and many more. All of those affect American citizens.

            • tw1984 3 hours ago
              > That doesn't mean it's positive in human rights

              How about your pack up your arrogance and stop defining human rights for me and other 1.4 billion Chinese?

              • fisf 6 minutes ago
                Well, the National People's Congress / CCP define and frame that practically for you.

                It's not like 1.4 billion Chinese have much say in that.

                If I am wrong, please remind me again how much say Chinese people had on the escape hatches of Article 51 in your constitution.

            • OtomotO 3 hours ago
              How positive for the human rights of the people of invaded countries was the US?

              Ask around in Vietnam, Iraq, Syria and countless more countries around the world.

            • epolanski 3 hours ago
              [dead]
          • FrojoS 3 hours ago
            [flagged]
            • subdude 3 hours ago
              I agree, that's why Iran is correct to arm and defend themselves against Israel and the US.
            • samrus 2 hours ago
              Yeah, those 8 year old girls had been 2 weeks away from developing a nuke. Had been since 1997 im told
          • rhubarbtree 4 hours ago
            [flagged]
            • fastasucan 3 hours ago
              Not very democratic to invade other countries on the whim of a president.
            • JumpCrisscross 4 hours ago
              > they said democratic

              They didn't even say that. They only said China playing is "better than leaving everything to the US alone."

            • Cthulhu_ 4 hours ago
              For now indeed, the people that want to get rid of it are currently in power.

              The US was one of the first democracies in the world, and many countries followed suit. But the US hasn't kept up, and now the powers that be have exploited the weaknesses in the system. With arguably the biggest one being giving the president too much power (appointing supreme court justices, executive orders, etc).

            • jack_pp 3 hours ago
              Democracy in most of the countries is just theater. Trump promised no more wars iirc.

              Don't get me wrong, I'd rather live in a country without a million cameras that automatically fine me for crossing the street illegally but I don't actually deceive myself in thinking my vote counts for much.

              • culi 3 hours ago
                > I'd rather live in a country without a million cameras

                Are you talking about the US or China? https://deflock.org/

                China at least banned the use of facial recognition in public spaces by their supreme court in 2021 (and then further strengthened the ban in 2024 and also got the PIPL).

                If you're thinking of the "social credit" system please know that that's just an online meme. China's credit score system is not even nationalized and not nearly as invasive as the US's credit score system, which can sometimes determine whether or not someone is allowed to buy a house.

                Besides their own credit score system, the other thing that sometimes gets labelled the "social credit system" was an attempt they had to track the behavior of business leaders and elected politicians. Basically anyone who holds social power but not the common person. This also never really took off and was not ever nationalized/centralized.

              • palata 3 hours ago
                > I'd rather live in a country without a million cameras that automatically fine me for crossing the street illegally

                Agreed, but there again, the democracies have surveillance capitalism, it's not exactly like we're not being tracked.

              • phatfish 3 hours ago
                You let Trump and all the tech-bro shitheads win with that attitude unfortunately. Democracy is an ongoing battle.
            • cumshitpiss 4 hours ago
              [dead]
          • nailer 1 hour ago
            > I see an American believe they somewhat have the moral high ground over China

            The elected government of the US has the moral highground of over the regime that killed the KMT in it's weakened state after the KMT defeated Japan, went on a rampage against the educated classes, mowed down its own people with machineguns and tanks when they demanded a say in their own governments, and kidnaps people advocating for democracy to this day, including Jack Ma.

            > despite starting a new war... on behalf of Israel every six months.

            The war started when Hamas, funded by Iran, went on a murder and rape rampage against Israeli civilians.

            • iso1631 1 hour ago
              The origins of this war date back decades, arguably far longer.
        • chmod775 3 hours ago
          > Still not sure how I feel about China of all places to control the only alternative AI stack, but I guess it's better than leaving everything to the US alone.

          Fully agree. From a US perspective, that sucks. For everyone else it's pretty great.

          At this point the world's opinions of China are better than those of the US in some polls. One country invests and helps build infrastructure on a massive scale globally, the other alienates allies, causes countless conflicts, and openly threatens to end civilizations.

          Indeed, even if one isn't partial to China, there's reasons to be glad that an increasingly hostile US has powerful competition.

          > This is about who will dominate the world of tomorrow.

          For this you'd need a technological moat. So far the forerunners have burned a lot of money with no moat in sight. Right now Europe is happy just contributing on research and doing the bare-minimum to maintain the know-how. Building a frontier model would be lobbing money into the incinerator for something that will be outdated tomorrow. European investors are too careful for that - and in this case seem to be right.

          • benterix 2 hours ago
            Yeah it's confusing. I mean China has work camps for Uighurs and is very brutal on Tibetans etc. OTOH, their leader is not setting the world on fire every second week and compared to Trump seems like the paragon of reason on the surface. Of course we know it's a facade but man what crazy times to live in.
            • wallst07 1 hour ago
              If Trump acted more like Xi with regards to public speaking, but the actions were still the same, thing would be a lot different.

              My point is that Trump could sign/execute/order all the same exact things he's done, but if I just never spoke about it, or kept hidden like Chinese do, he would be compared MUCH differently.

              • AntiUSAbah 12 minutes ago
                If someone like Trump could talk smarter, he would be smarter and would do things smarter.

                That would also make him a lot more dangerous. After all in his first presidency he was still the man behind the biggest military on the planet but he knew shit on how to leverage this. In his second term he is even more loose but loose is tempertantrums and simple short sighted strategies. Easy to read, hard to accept.

            • HarHarVeryFunny 59 minutes ago
              You do realize that the US has a greater percentage of it's citizens in prison than any other country, including China?

              In the US its not the Uighurs or Tibetans who are being oppressed - it's the blacks and immigrants. The US elected a president who characterizes immigrants as rapist and murderers (while he himself is a convicted rapist, suspected pedophile, and wants to commit war crimes in Iran).

              The facade, believed by many Americans, is that USA is the land of the free, a democracy (despite no popular vote) one of the good guys, but actions say otherwise.

              • nipponese 8 minutes ago
                In the U.S. we don’t ethically cleanse in the name of political stability - we ethically cleanse in the name of economic growth.
        • Cthulhu_ 4 hours ago
          Moral stances aside, I'd argue it's healthy that the US gets competition from abroad. I appreciate the boost that the world is getting from China - infrastructure and construction projects are a huge benefit to economies. Their focus on green energy has caused a huge influx of affordable solar panels, home batteries, EVs, etcetera, helping reduce the dependency on fossil fuels - while the US and especially the other big money spenders in the middle east would rather the world remain fully dependent on them. But for the past years Europe and now Asia are feeling the pain from being overly reliant on that.

          China's policies and government aren't morally defensible and I do fear that they will become more aggressive in spreading their influence and policies onto other countries, but from an economic standpoint what they're doing is super effective. While the previous world power (the US) is stuck in infighting and going through cycles of fixing/undoing the previous administration's damages, instead of planning ahead.

        • mft_ 2 hours ago
          You’re right… but that’s on the rest of the world not getting their shit together.

          It’s this sort of example (and not properly supporting Ukraine, and not agreeing how to collectively deal with migrants, and not agreeing how to coordinate defence, and myriad other examples) that highlights what a pointless mess the EU is. It’s not a unified block - it’s 27 self-interested entities squabbling and playing petty power games, while totally failing to plan for the future with vision.

          The EU could/should have ensured that a European equivalent to OpenAI or Anthropic could thrive, and had competitive frontier models already; instead, they’re years and countless billions behind.

          • simgt 2 hours ago
            The EU pouring even more billions in this would just have meant pouring billions on US tech. China is winning on all fronts at this game because of the embargo, they end up even more vertically integrated as a result of it.
            • benterix 2 hours ago
              > The EU pouring even more billions in this would just have meant pouring billions on US tech.

              Which is crazy given that ASML is European.

        • chvid 2 hours ago
          The important thing is that LLMs are well-dispersed and the technology is relative open, much more open than it could have been. Alternative worthwhile LLMs will emerge from Europe and other non-US western countries once the economic incentives are there.
        • AntiUSAbah 15 minutes ago
          Come on... I was hoping that Mistral would do something and man that would be great as european but I hear NOTHING from them ever.

          I don't know what the problem is. Are we europeans to stupid? Do we just not have enough money / VC money? Are we not proud enough?

          :(

        • amunozo 4 hours ago
          Competition with the Soviet Union gave all the workers in the world better conditions, also advances in science and technology... (And risk of mutual destruction ;)), even if the USSR wasn't good.
        • SgtBastard 3 hours ago
          Mistral (a French company) shouldn’t be discounted.
        • cde-v 2 hours ago
          China doesn't even care about Taiwan anymore, their saber-rattling about it is a convenient distraction while they quietly make it completely irrelevant in the next few years.
          • iso1631 1 hour ago
            It does seem the idea is to get the Taiwanese people to want to choose to rejoin China by making China far better for people to live than Taiwan. Maybe that will be via democracy (i.e. China manipulates the people of Taiwan), or perhaps it will be genuine (i.e. China provides a far better lifestyle for the average person than Taiwan)
            • nipponese 26 minutes ago
              I have seen first hand how Chinese nationals behave when visiting Taiwan - it’s not pretty.

              Shared language and history aside, these two cultures are not in the same solar system when it comes to social norms and curtesies.

        • Danox 3 hours ago
          Isn’t Mistral close in the ballpark?
          • 2ndorderthought 23 minutes ago
            Mistral has a different focus. They aren't taking on trillions in debt risking their entire economy to produce useful products.

            I think they are leaders in the democratization of LLMs. Almost everyone has a computer right now that can run a useful variant of a Mistral model. I hope they keep their focus because what they are aiming for likely has the biggest impact on the average person and would be the best case scenario for the technology in general.

          • Lapel2742 3 hours ago
            AFAIK: Current Mistral models are not competitive with SOTA-models that come out of the USA or China. They are "good enough" for enterprise usage when you don't need SOTA performance.

            Their main selling point is: They are neither US-American nor Chinese. That's a real moat in today's world. I think at the moment they feel quite comfortable.

          • techsystems 2 hours ago
            There are no European models that come close. It's Korean models, then a UAE model K2, then Mistral.
          • eunos 3 hours ago
            They arent. Benchmark wise they are quite apart.
      • spaceman_2020 2 hours ago
        I've been baffled watching America double down on the same strategy even when it failed to produce results

        They sanctioned the hell out of Huawei and now Huawei is bigger than ever

        America is just not able to digest the idea that another country can be as good, if not better, at innovation

        • nipponese 22 minutes ago
          Because it worked on Japan in the 80s and 90s and sometimes “Americans” have a hard time telling the two cultures apart.
        • hirako2000 1 hour ago
          Deeper than the inability to digest. The incapability to comprehend it.

          China's fall in the 19th century came at them for the same reason. How could these European savages be stronger, thus better than us? Our intelligence service must be out of their mind.

        • 2ndorderthought 18 minutes ago
          America has been making short term and short sighted moves to try to widen a gap that cannot sustain. They have chosen the wrong strategy out of fear and greed. Cooperation is the right strategy. Isolationism will not work in the long term except for maybe the handful that drove it. The irony is that it's an anticompetitive and anticapitalist move to do what they have been doing, so it's not even on principal.
      • srameshc 1 hour ago
        As much I apprecite the sentiment, I think it is too early to declare that the well guareded monopoly is over. Yes, these models have answers, but don't expect all the large enterprises to switch to these models. The other aspect is scaling to serve these models will need a lot of time even if Huawei succeeds. Not all the Governments trust China and there will be a lot of resistance to work with these models eventually, even if cheaper.
      • lanthissa 3 hours ago
        not really, china has gone domestic for everything as soon as it could.

        its naive to think they would have stayed on a 'western' stack.

        Most of the time 'losing' isn't making a bad choice its being put in a situation where you have no good choices.

    • ifwinterco 5 hours ago
      As a Brit I'm here for it to be honest, I'm tired of America with everything that's going on.

      China is not perfect but a bit of competition is healthy and needed

      • jurgenburgen 4 hours ago
        I don’t know if we’re ahead of the curve but that tired feeling has started turning into hate here in the EU. I guess being threatened with invasion does that to you.

        The next decade is going to look very different with America Alone.

        • koe123 4 hours ago
          I grew up in the states when I was younger, always feeling some closeness to Americans even after I moved back to Europe.

          With all that goes on it has changed. Recently I sat on a plane near some Americans discussing their holidays here, and I noticed I felt contempt. Sitting their with insane privilege as their government torches the world.

          Individuals remain individuals, and one really ought not to be prejudice. However the lack of resistance I see in in the “land of the free” as their “democratic” institutions collapse just makes me believe they never cared at all. In France cars are torched if the pension age is raised. In America the rise facism apparently doesnt matter to them.

          • 0xDEAFBEAD 1 hour ago
            >However the lack of resistance I see in in the “land of the free” as their “democratic” institutions collapse just makes me believe they never cared at all.

            Largest protests in US history just in the past year:

            https://en.wikipedia.org/wiki/List_of_protests_and_demonstra...

            >insane privilege

            My sister and brother recently graduated from college, have been searching for jobs for over 6 months, they can't find anything. They're politically liberal Californians.

          • mettamage 4 hours ago
            From my small bubble it's not that. I'm Dutch, married to an American who now knows enough Dutch such that we can treat it as a secret language when we're in the US.

            My family in law seems to swing slightly republican. As a Dutchie, I could get some answers because I'm too naive not to talk about politics. So I got to probe a bit. What I simply found was that they'd say "I can't trust the news, none of it. Not CNN, not Fox News, nothing". Then I'd say "well in the Netherlands, I'd argue that while news outlets have their bias, you can trust them on basic factual reporting". She looked at me with a stare that I could only describe as "oh but honey, you're too young and naive to understand". To which I thought "you don't know the Netherlands. We're not perfect but we're nowhere near as deranged as what I'm seeing here".

            I think that explains a lot of it for some people. The trust in the media, all media, is completely broken. Trump has how many fellonies now? Can't trust it. Kamala is doing what now? All talk. DOGE is fixing the government? I fucking hope so! But can't trust the damn news. Whether they do or don't, they are always burning money, god damn bureaucrats.

            I feel that's the mindset that my family in law has.

            • zmmmmm 3 hours ago
              > I can't trust the news, none of it. Not CNN, not Fox News, nothing

              This view gets echoed here on HN a lot. I find it very strange to be honest, because I tune in to CNN and I see lots of bias in the commentary and editorial, but when it comes to factual reporting they are pretty straightforward and down to earth. It seems to me that the real issue is people don't seem to distinguish between reporting and editorial content / commentary. Stop watching that garbage and actually consume the factual content and analysis. Yeah it's dry and boring but if that isn't enough for you then it just shows you never cared about facts in the first place.

              • mettamage 23 minutes ago
                > but when it comes to factual reporting they are pretty straightforward and down to earth.

                No, not really. I mean for me, yea, sure, easy. But in the general case? It depends on who you are.

                The reason I trust CNN is because when a Dutch news source reports more or less the same thing, I can easily see the reporting matches with that of CNN. Because of this, I personally have some built up trust with CNN. When I look at Fox News, oh deary... it's nothing like what I see on the Dutch news.

                This is not something I do consciously, it's simply that I happen to watch Dutch news sometimes and I happen to see American news sometimes and it costs no effort for me to compare. Combine that then with that on HN I also sometimes see BBC and similar British venues (e.g. The Economist is also British I believe?), and now I suddenly have 3 countries worth of news sources.

                Many Americans don't really know that the UK exists other than that they rebelled against it. Many Americans almost haven't left their 20 mile radius world (many also did of course). But it's these people that I tend to have a lot of in my in-law family or however you call it (schoonfamilie in Dutch). I'm quite exotic to them in that sense, and definitely foreign. Thank god they have some Dutch roots.

                Point being: with that mindset, you're not checking out what the BBC has to say on a topic. You're checking American news, not because of patriotism but simply because of that's all you know and going outside of what you know costs effort. And you already have a job to do, come home late, just want to watch your shows in the evening and that's it.

                I am by no means saying that this is representative for all Americans, it isn't. What I am saying is: I see this a lot in my slice of the US. The reason I'm sharing it is because what my in-law family is saying is definitely at a much more personal level than whatever conversation I've had with some random, but lovely, person from a hacker space or hacker house in San Francisco.

                Yet, I don't see this view a lot on the news. Nor do I hear Dutchies talking about it, they are simply out of the loop when it comes to a view like this. I don't know how prevalent it is, but if many people of a family of 50 to 100 people is in a situation like this, then my bet is that they aren't the only family.

              • InsideOutSanta 28 minutes ago
                Getting people to have an undifferentiated distrust of news organizations in general is an important aspect of technofeudalism.
            • fastasucan 3 hours ago
              I think this is spot on. "Every fault of america is just how it is in any society.". Nice way to just accept it.
            • JumpCrisscross 4 hours ago
              Out of curisoity, what is your wife's take?

              My running hypothesis has been the trust breakdown arises from social-media overexposure driving lazy nihilism, which in turn gave free reign to a uniquely-corrupt class of politicians. But I'm not sure how to neutrally evaluate that.

              • mettamage 17 minutes ago
                Will ask, can't promise an answer, but will post it as a child comment here (or edit this one if it is within one hour).
              • virgildotcodes 2 hours ago
                I think the collapse of public trust was very intentional, and the result of a much longer term effort than social media.

                The most famous examples are likely the tobacco industry spreading misinformation through self-funded studies and experts, and the fossil fuel industry doing the same to seed doubt about climate change. But of course we can think of countless examples of entire industries and individual large corporations pushing out misleading bullshit, threatening or outright killing journalists and activists to cover up their catastrophic fuckups and their chronic conscious excretion of negative externalities.

                This has all of course been going on since the dawn of time, but to focus on the last century in the US, we've seen all sorts of corporations and coalitions of rich and powerful people push misinformation into nearly every sector of our society - universities, science, journalism, politics, etc. in order to undermine confidence in shared facts, corrupt people's ability to discern whether or not something is fundamentally true, and sow confusion so that they can continue to operate in perpetuity in this chaotic maelstrom of doubt.

                Lots of capture of government towards these ends as well, we can look at the concomitant constant cuts to education in order to weaken people's understanding of the world and ability to think critically. The revocation of the Fairness Doctrine was probably a step change, and Trump represents the sharpest recent escalation of all this.

                From day one, he's done everything he can to shred any collective notion of shared objective truth. Anything he doesn't like is fake news, and the idea that the media is lying, scientists are lying, experts are lying, and institutions are lying, he has spread so fucking successfully through society, to the point where Americans no longer have anything like a shared sense of reality.

                It seems like we're being reduced to tribes who are organized primarily around faith in various charismatic individuals.

                I think this is fundamentally the worst thing he's done, because it lays the foundation for virtually every other conceivable and inconceivable abuse. If people can't even agree on what is happening, we're fucked. People and institutions in power can do anything they want to whoever they want, because the public has lost their ability to even recognize the danger posed to them collectively and thus mount any resistance based on a shared sense of reality.

                Social media has definitely famously accelerated aspects of this like the fragmentation and the spread/magnification of fringe worldviews through echo chambers, but I think it's just one (and maybe this is controversial, but I'd be willing to be generous enough to think the 20something year old creators were too stupid to conceive of these long term consequences at first, but who knows, maybe not) element in a much longer and more intentional, malicious war against the many for the benefit of the few.

                • dkga 2 hours ago
                  Not only that, but in tandem the collapse of social capital in the US has been the result of a very intentional process (on top of the multidecade undercurrent of declining social capital). This according to Robert Putnam himself (sorry, don’t have time to find the source now but will add it later).
                • moshegramovsky 1 hour ago
                  Hannah Arendt wrote about the collapse of shared truth in societies. Trump is in some terrible company, literally and figuratively.
            • roer 3 hours ago
              This is quite interesting. I'm not sure what can to be done to reverse this? When you've reached a level of untrust where you deem trust itself naive, how can you recover?
              • mettamage 15 minutes ago
                Teach Americans to look at news sources in other countries?

                Shooting from the hip here. Feels like a duct tape hack on first thought.

                I mean that's what I do, subconsciously. I think a lot of Europeans do this because a lot of Europeans tend to speak English and then their actual native language, or something similar (e.g. I wonder how Swiss people experience this).

          • TomGarden 3 hours ago
            I a European who spent the last decade in America and I'm not sure I'd call Americans privileged compared to Europe. With money being the one means you have to be treated well in society, comparing it to Europe, America feels like the hunger games. Want healthcare (ie surviving)? Healthy food? To own your house? Welcome to the games
            • wiseowise 2 hours ago
              Europe doesn’t wage war right now. Their point is that Americans are talking about vacation while their troops invade and destroy Iran.
              • wallst07 1 hour ago
                OP needs to read up on war history and the US. Spoiler, it's been this way since WWII.
          • hirako2000 1 hour ago
            You felt contempt, can you imagine if you were Iraqi, Afghan, Syrian, Russian, Sudanese, Lebanese, Iranian, should I mention it: Palestinian.
          • n8cpdx 4 hours ago
            It’s not that it doesn’t matter to Americans. It is worse; half the population (or at least, half the voting population), is thrilled with the development of fascism. The other half has been ringing the alarm bells for well over a decade; it seems to make no difference.

            And you’re right, most Americans do not understand the privileges they have or give one single shit about democracy; it is just not a salient political issue. But eggs… don’t get me started on eggs.

            • KronisLV 2 hours ago
              > The other half has been ringing the alarm bells for well over a decade; it seems to make no difference.

              I feel like the issue there is that alarm bells in of themselves solve nothing. I won't extend that argument to one of its obvious conclusions, but instead I will say that efforts to attack education and critical thinking skills all contribute to people being susceptible to their democracy being corrupted and robbed blind - so having an educated populace with a sense of integrity and respect of human rights would help!

            • Cthulhu_ 4 hours ago
              It's probably a bit more nuanced than "half this, half that"; when you look at the facts, most voters aren't that extremist. A lot of votes vote one way or the other because they would simply never vote for the other.

              This is why the swing voters / swing states are so important in the US, because only a few million are flexible enough to switch sides.

              Of course the core issue is that there's a two party system; while I'm sure that in a healthy democracy the current republican and democrat parties would be the bigger ones, they wouldn't have a majority.

              • HarHarVeryFunny 54 minutes ago
                > This is why the swing voters / swing states are so important in the US, because only a few million are flexible enough to switch sides.

                Of course if the USA was an actual democracy, electing it's president by popular vote, then this would not be an issue - every vote would count to tip the balance in favor of who the people wanted to elect, not just the votes of the 20% fortunate enough to live in a "swing" state.

              • cma 17 minutes ago
                Parliamentary systems with more coalition involved instead of two party first-past-the-post can foster extremes too, like we're seeing in Israel.
              • drcongo 2 hours ago
                > A lot of votes vote one way or the other because they would simply never vote for the other.

                This, for me, is the crux. Politics is treated like a team sport in the US, you pick your side and cheer them on no matter what. And team sports in America are even more bananas - you grow up supporting the Brooklyn Dodgers and a few years later they're 2.5k miles away with a new name. This seems a perfect example of what's happened / happening to the Republican Party - it's not the same party any more, but everyone who tied their entire personality to cheering for the red team is still cheering for it as it burns the country to the ground. I predict that inside ten years it will have also had the name change and probably be run out of Florida or somewhere.

          • sterlind 3 hours ago
            not all of us are just "sitting here with insane privilege." it's quite dangerous for some of us right now.

            I'm trans. this Administration does not like us. after Charlie Kirk's murder, things got legitimately scary. Musk was retweeting people who called us "deranged bioweapons" who needed to be "forcibly institutionalized." NSPM-7 is surveilling and infiltrating trans organizations. the Heritage Foundation proposed labeling us as "ideological extremists," in the same category as neo-Nazis. if I'm arrested, I'll go to a men's prison where I'll likely be given to a violent inmate as his cellmate to "pacify" him (V-coding.)

            so yeah, I keep my head down. a lot of Jews kept their heads down in Germany in the '30s, you know? and just like then, it doesn't seem like other countries are too keen on taking us in as refugees. I hope that changes if things get bleak.

            • koe123 1 hour ago
              You make a good point, I’m sorry to generalize.
            • bmn__ 34 minutes ago
              That's what happens when your ilk leaned far out the Overton window and did not listen to the repeated warnings to leave the kids alone.

              Wanna play at being a radical without popular support? Don't see the necessity to police the mentally ill and moderate the extremists in your group? Well, FAFO is all the rest of society can say.

              • koe123 1 minute ago
                I suppose this could be rage bait, but would you justify the violence that the poster is afraid of also if someone is “ilk” of the other side of the aisle? E.g. white nationalist types?

                Does being “extreme” justify extra-judicial violence?

            • JonChesterfield 3 hours ago
              Get out seems an important priority. Good luck
          • jorvi 4 hours ago
            > In France cars are torched if the pension age is raised.

            This is not something to be proud of. You guys are giving yourself loaned freebies, retiring 5+ (!) years earlier than countries like BeNeLux and Germany, and are pretty much expecting the EU to eventually pick up the pieces which will drag us all down.

            Edit: always lovely when HN downvotes truths :)

            • the_gipsy 3 hours ago
              That's bullshit. Pensions are not a zero-sum game, and other countries don't have to pay for them.

              It just doesn't make sense to delay retirement while youth unemployment is such a big problem. We ALL should be fighting like France, in many aspects.

              • jorvi 3 hours ago
                Other countries don't directly pay for the pensions, but France is staring into a giant fiscal abyss because of their low retirement age (and other generous social benefits). Any attempt to change those results in the country being taken hostage by rioters, thus nothing changes.

                At some point France will be in too deep shit and will look to the EU to cover for them. We will all pay for that. And it is deeply unfair because other countries their citizens have accepted later retirement and more frugal benefits to keep their countries fiscally healthy.

                France could cover the fiscal hole in other ways, but taxing corporations and wealth at a higher rate also consistently ends up being blocked. And each year the hole gets deeper.

                • snakeboy 2 hours ago
                  > Any attempt to change those results in the country being taken hostage by rioters, thus nothing changes.

                  Your theory doesn't actually match with reality, given that Macron's retirement reform was passed into law despite protests. As currently enacted, the age of retirement in France will progressively increase from 62 until reaching 64 in 2030.

                  • jorvi 1 hour ago
                    It does match reality.

                    Reform wasn't passed, it was forced via a technicality after riots made it politically unpalatable, and it has put France in a governing crisis ever since.

                    Also, retirement in North, West and Central EU is 67+, not 64. Greece is at 67 too, although begrudgingly.

                    Again, I'd be equally happy if France covers the fiscal hole some other way, but I am not going to cover for a country that is willingly becoming the sick man of Europe because they want to live comfortably on borrowed time. Which, by the way, is a literal repeat of Greece its crisis. Time is a flat circle indeed.

              • FrojoS 3 hours ago
                It’s not bs. France is lobbying for “Eurobonds”, debt they can take at German interest rates and with Germans etc holding the bag, for about two decades now.

                https://youtu.be/tMd7EfFsPIc (Video claims France is against them, but if they ever were they are not anymore)

        • miroljub 3 hours ago
          [flagged]
      • nipponese 21 minutes ago
        It’s a shame your country couldn’t get back its technical edge.
      • hsiudh 4 hours ago
        "not perfect" is a _very_ big simplification of what China is though
        • rglullis 4 hours ago
          Isn't that the same to every major superpower?
          • FuckButtons 4 hours ago
            No. There is no moral equivalence with totalitarianism.
            • ywvcbk 4 hours ago
              Modern China isn't exactly totalitarian though and US is rapidly converging with China in that regard anyway.
              • odiroot 37 minutes ago
                Just as long as you don't openly mention the "three Ts".
              • bwv848 3 hours ago
                How totalitarian is exactly totalitarian? I asked chatgpt and it gave few points

                - Control goes beyond politics

                - A single, all-encompassing ideology

                - No meaningful private sphere

                - Mass mobilization and propaganda

                - Extensive surveillance and repression

                Seems like China is ticking all the boxes.

                • drcongo 2 hours ago
                  Honestly thought you were listing traits that the US has now till the last line.
                  • Levitz 1 hour ago
                    In what universe does the US not have "meaningful private sphere"?
                    • AntiUSAbah 7 minutes ago
                      Meta, Google and co control all your private data. GDPR is a european thing not an american or chinese thing.

                      CIA/FBI have their own massive data centers (see snowden) inkl. their own older bigger palantr style software.

                      Elon Musk was able to connect a Starlink server to your data and no one cared. He and his Duche aeh sry doge baby boys were able to access and download all Social Security Numbers.

                      If someone knows were Putin and all the other world leaders are at any given moment, I would bet its USA first than China if even because i don't think China cares that much about it than USA does.

                      And everyone out of scope of this, lives probably in some rural USA town were no one cares for you at all anyway, but thats the same thing as in China.

                  • bwv848 2 hours ago
                    Really laugh my ass off, so much whataboutism and American centrism when the debate is whether China is trustworthy on AI. Given your ignorance you should go and do your research, but I will help you a bit here.

                    - Control goes beyond politics

                    state corporation monopoly, 党支部 in private sector, crackdowns on NGOs and charities.

                    - A single, all-encompassing ideology

                    Party led, mandarin speaking Han Chinese nationalism, blended with Little Pink's unquestionable support for Xi and the party.

                    - No meaningful private sphere

                    社区网格员

                    - Mass mobilization and propaganda

                    We saw mobilizations on Chinese social media, attacking celebrities who don't openly say anything the party wants them to say. Mobilization in real life is rare though, cos it had shown it can backfire.

                    - Extensive surveillance and repression

                    Do I really need to explain this?

            • Hamuko 4 hours ago
              Which are the current nontotalitarian superpowers?
            • DiogenesKynikos 4 hours ago
              China is not totalitarian. Many people believe that China is still like 1950s-60s-era Maoist China, but it's just not.
              • FuckButtons 3 hours ago
                tiananmen square was in 1989. Hong Kong was snuffed out like a light. Covid saw people caged and sealed in their houses. You do not need to look back at the cultural revolution to see the prc for what it is.
                • mysecretaccount 34 minutes ago
                  > Hong Kong was snuffed out like a light.

                  I'm in Hong Kong right now. Seems like it is still here to me.

                • e4325f 2 hours ago
                  and the Kent State shootings were in 1970.

                  Being self-righteous and a yank doesn't make sense, country of war mongers, something that cant be said of China.

                  • FuckButtons 1 hour ago
                    Clearly I’ve hit a nerve if you’re stooping to whataboutism. Perhaps you should reflect on why that is.
                • DiogenesKynikos 3 hours ago
                  Is your contention that Hong Kong is also a totalitarian society? Have you been to Hong Kong in the last 5 years? I feel like people saying these sorts of things are just completely divorced from reality.

                  > Covid saw people caged and sealed in their houses.

                  No. There were a few incidents very early on, when everyone was (quite understandably) panicking about a new, deadly virus that nobody had ever seen before, when some local city officials barred the doors of people who had just come from Wuhan. That was a scandal inside China, and it was immediately reversed.

                  What China did do quite extensively was border quarantine, and during localized outbreaks (caused by cases that slipped through quarantine at the border), mass testing and quarantine measures. This was during a once-in-a-generation pandemic that killed millions of people. In China, these measures saved several million lives. The estimates are that China's overall death rate was about 25% that of the US, and these measures are the reason. By the way, Taiwan and Australia took nearly identical measures, and I very much doubt that you would call them totalitarian societies.

                  • bwv848 1 hour ago
                    > That was a scandal inside China, and it was immediately reversed.

                    Tell it to the people in Wuhan, and Shanghai, Urumqi, and other cities that had lockdowns. I was in Shanghai in 2022, I was confined to my apartment for nearly 3 months, you couldn't be more wrong.

            • NicoJuicy 4 hours ago
              That's also the current US administration.

              Luckily laws still stand somewhat.

              ( And Trump ain't smart enough)

              • fragmede 2 hours ago
                Trump's smarter than he lets on. He plays the buffoon in public, but he's smart enough to have gotten elected twice. Which is two times more than I've managed to.
          • scrollop 4 hours ago
            Whatbaoutism at it's finest.

            Have a peek at the fredom indx and the press freedom index for China. Guess where they stand?

            You know about the chinese internet firewall.

            You can't trust any data from the CCP.

            And please don't equate the aberration that is the Trump administration with "regular" US administrations (and this is coming from a non US person).

            • rglullis 3 hours ago
              People in China live under totalitarian rule, that much is true.

              But how free is the average North American, where getting sick can bring you and your family financial ruin? Where the "free press" is controlled by corporations who are also the main source of campaign funding for politicians? Where their urban spaces are designed to require you to have a car and promote complete atomized individuals?

            • amunozo 3 hours ago
              Regular US administrations that commited war crimes in half the world for decades. But apparently it only matters what they do in the US.
            • Revanche1367 3 hours ago
              Indexes made by Europeans and Americans to congratulate themselves are not reliable.
              • culi 3 hours ago
                Exactly. Even if you don't buy into western biases, it's heavily reliant on subjective perception surveys. Hardly proof of anything
            • rhubarbtree 4 hours ago
              You’re right, for now, but I think trump will try to turn America into a dictatorship.
            • _blk 4 hours ago
              ..you forgot to mention that any technology in China, foreign or domestic, can and will be used for and to the benefit of the -military- party.. But like someone posted: "not perfect" fits the bill.

              Check out the Sean Ryan Show with Palmer Luckey on China and military tech.

              • Thlom 2 hours ago
                Same goes for every country on earth?
        • IsTom 4 hours ago
          You can say the same about the US
        • hunter67 4 hours ago
          they compare it to fascist USA though
          • allan_s 4 hours ago
            Ask a gay, a black or a Japanese how it feels living in China.
            • cromka 3 hours ago
              Outside of gay people, the rest is your projection: they are homogenous society, racial problems are nonexistent. US is heavily heterogenous and despite that you segregated like a third of society at the time.

              Ridiculous take.

              • Levitz 1 hour ago
                >Outside of gay people, the rest is your projection: they are homogenous society, racial problems are nonexistent.

                You do know that Chinese people do go to other countries and that we all can see how insanely racist they can be right?

              • nailer 1 hour ago
                > they are homogenous society

                No, China is not homogenous.

                > racial problems are nonexistent

                Ask a non-Han about how they feel about that statement.

      • barrenko 4 hours ago
        This is such a tired argument, and morally repugnant. Where is the UK in the race, where is the EU? Lets get of our asses and stop moralizing.

        (China wiped out the entire EU industry through a "quiet" trade war since like the last 15 years, and we're not really talking about that aren't we...)

        • Cthulhu_ 3 hours ago
          Not so much a trade war as basic economic forces, and it's been going on for much longer than that. When infrastructure improves, companies and customers can look further to get their stuff done. If it's cheaper to do your industrial or manufacturing work abroad and have it transported to your country, that just happens.

          The powers that be try to slow this down by banning imports outright (you can't for example import American chicken into Europe because of food safety laws), or high import taxes (Chinese EVs have a 50% import tax in Europe and the US to protect the local car manufacturers. Which is fair because the Chinese EV manufacturers are state-sponsored so their prices are unfair. Then again, western companies get billions in investor money to push the prices down).

        • HatchedLake721 4 hours ago
          UK has the people but not the electric grid/infrastructure to compete.

          EU/France has Mistral.

          • whywhywhywhy 3 hours ago
            France has mistral and the energy infrastructure to compete, the EU has nothing.
        • calgoo 3 hours ago
          You mean the west handed their industry to china over the last 15 years? Its not like the US is any better off in this. The EU is not a country, so you can't talk about it as if it was. Each country has their own companies and industries. There is AI in Europe, and its growing, however we might not be as "energetic" about destroying our countries to build giant data centers to serve our billionaire overlords. That does not mean that there is no investment, there is, including a bunch of American corporations like Amazon. But there is also a lot of corruption and bribing (lobbying - lets call it what it really is, no more whitewashing) going on around that too.

          So again, stop referring to EU as a country, we are not, and it just annoys any Europeans as it comes of as "Americans who don't understand the world outside of the USA".

      • falkenstein 4 hours ago
        america is a continent. let’s take back our vocabulary (fellow european here). the little orange man shows very well what i mean when he started giving names to the gulf of mexico.
        • 0xDEAFBEAD 50 minutes ago
          "In English, North America is its own continent as is South America. The two can be collectively labeled the Americas or the Western hemisphere. Canadians frequently refer to themselves as North Americans and never as Americans. To insist this change is to demand the entire world’s lingua franca redefine words and thereby cause mass confusion for its speakers simply because doing so would be consistent with an arbitrary definition found in a foreign language."

          https://scrupulouspessimism.substack.com/p/america-means-the...

      • nailer 1 hour ago
        As someone that lived in Britain for 15 years until 2024, I'm not sure a nation with a GDP per capita lower than Poland, that is now poorer than every state in America, with a gang rape epidemic the government tried to suppress investigating should really concern itself with how other countries are ran.
      • lifeisstillgood 4 hours ago
        As a different Brit I do not accept such moral relativism.

        China’s governments actions are on a completely different level - for example:

        “””

        Since 2014, the government of the People's Republic of China has committed a series of ongoing human rights abuses against Uyghurs and other Turkic Muslim minorities in Xinjiang which has often been characterized as persecution or as genocide.

        “”” https://en.wikipedia.org/wiki/Persecution_of_Uyghurs_in_Chin...

        https://www.amnesty.org/en/location/asia-and-the-pacific/eas...

        Yes Trump is clearly trying Totalitarianism in America, but it is orders of magnitude different from what is happening in China.

        • amunozo 3 hours ago
          Why do we ignore all the human right abuses the US perform abroad? Iraq, Afghanistan, now Iran, Gaza and Lebanon through Israel, support to Saudi Arabia (which would not exist without the US), El Salvador... And inside it's also horrible with its treatment to immigrant.

          That should be at least comparable (if not worse) than what China is doing.

          • OCASMv2 3 hours ago
            Yes, El Salvador is so evil for imprisoning dangerous criminals and protecting innocent lives.
            • EmiMusso 1 hour ago
              El Salvador is blessed by evil criminals put away from the streets. It took thousands of those who you defend for a whole country to be free to enjoy tranquility and security. I was born there and I know better than you calling us evil
            • samrus 2 hours ago
              This is how china tried to justify its genocide against uighers. Was theboutrage against that just politically motivated? Or do americans only care about ethnic cleansing when theyre not the ones doing it
              • amunozo 1 hour ago
                They also don't care when done by their allies.
            • amunozo 3 hours ago
              Not for imprisoning, but for imprisoning them in draconian conditions, without proper judgements, etc. Have you seen those prisons for fuck sake?
        • cedws 2 hours ago
          There’s little to no evidence of such “genocide”, but I can go on YouTube to watch videos of the US bombing civilians in the Middle East.
          • wallst07 1 hour ago
            China is much better at hiding anything negative.

            It's a little insane to me people comparing negatives of US and China. I mean, the simple fact we're allowed to say just about anything we want that is critical of the administration on this forum, in English and nothing happens is clear there is no comparison.

            You have no idea the full breadth of the Chinese government because information is closed so quickly, in America it's all on display right in front.

        • tw1984 2 hours ago
          It is just shocking to hear such stuff from someone in the UK.
        • phatfish 2 hours ago
          The US supports the genocide in Gaza, it supports the bombing of Lebanon. The US itself has now started (another) war and bombed Iran.

          China is repressing the Uyghur and threatening Taiwan. I don't agree with these actions but is really "orders of magnitude" worse than the destruction the US facilitates in the Middle East?

          With Trump they are now openly hostile to European democracies, and ICE and doing their best at repression within the US.

          • chronc6393 2 hours ago
            > The US supports the genocide in Gaza, it supports the bombing of Lebanon. The US itself has now started (another) war and bombed Iran.

            > With Trump they are now openly hostile to European democracies, and ICE and doing their best at repression within the US.

            And what is Europe going to do about it?

            Boycott ChatGPT and Claude? Ha.

            • ifwinterco 1 hour ago
              That isn't really the point though, of course the UK can't stop these things by itself.

              The point is US "soft power" is eroding incredibly rapidly and this will have consequences

        • Markoff 3 hours ago
          genocide right, that's why Uighurs were allowed to have more children than majority of Chinese Han population /facepalm

          by your logic gentrification of neighborhoods with different people moving in is genocide as well

          Btw. remind me when last tiem China bombed school and killed 150+ school girls as your friend US?

          Or as Brit I hope you are proud about all the killing your country participated in in illegal invasion to Iraq based on fake news about WMD.

        • junnan 4 hours ago
          It's 2026 and people still believe this Uyghur genocide propaganda? In the meantime, Israel and the US have been killing people in the middle east for years, but china is "on a completely different level"?
      • timmmk 4 hours ago
        Fellow countryman here. I came here to say the same thing
    • dzonga 2 hours ago
      Jensen Huang said this in his recent interview - that China has the best/most engineers, it has the chip making ability, it's a good thing they wanna build on a Nvidia stack - but if you push them they will build on an all Chinese stack - but the interviewer was being a numb head who kept parroting the propaganda of Western tech supremacy
    • wener 54 minutes ago
      As a Chinese, I feel tiered, it's like the cold war, what is takes to keep competitive with every aspect, it's just another win for the country and the corp
    • accountofthaha 1 hour ago
      Does the 'zero CUDA dependency' also count for running it on my own device? I have an AMD card, older model. Would love to have a small version of this running for coding purposes.

      Really nice to see the Chinese are competing this strongly with the rest of the world. Competition is always nice for the end-consumer.

    • TrackerFF 4 hours ago
      Let's see how long it takes before the big US AI companies start lobbying to outright ban use of Chinese AI, even the open source / local models. For "national security" reasons, of course.
      • chronc6393 2 hours ago
        > Let's see how long it takes before the big US AI companies start lobbying to outright ban use of Chinese AI, even the open source / local models. For "national security" reasons, of course.

        Already do on EVs.

      • barnabee 4 hours ago
        Hopefully the US’ self imposed isolation will mean that when they do, they aren’t able to force the rest of the world to follow suit.
    • khalic 3 hours ago
      Open weight and open source are not the same
      • SquareWheel 2 hours ago
        This is a pretty banal comment at this point. Open source is the term used in the LLM community. It's common and understood. Nobody is going to release petabytes of copyrighted training data, so the distinction between open source vs weights is a rather pointless one.
        • stefan_ 2 hours ago
          First you steal all the code, then you want to redefine the term? Is it never enough with you AI guys? Where's the humility, where's the good?
    • d3Xt3r 2 hours ago
      > Also, note that there's zero CUDA dependency.

      So does this mean I can run this on AMD? And on a consumer 9000 series card?

      • HarHarVeryFunny 49 minutes ago
        If you don't have the source code then it makes no difference. If you have the weights and are running some model via llama.cpp, then you are using whatever API llama.pp is using, not the API that was used to train the model or that anyone else may be using to serve it.
      • randomgermanguy 2 hours ago
        If you found a rare 9000 card with 200+ GB of VRAM, sure
    • ibic 4 hours ago
      "Open Source" is the ultimate romance understood by software engineers.
    • kitd 4 hours ago
      I can't find any info on what exactly is open sourced.

      And in any case what does open source actually mean for an llm? It's not like you can look inside it to see what it's doing.

      • gommm 3 hours ago
        For me open source means that the entire training data is open sourced as well as the code used for training it otherwise it's open weight. You can run it where you like but it's a black box. Nomic's models are good example of opensource.
        • adammarples 2 hours ago
          Yes the weights are basically compiled code, compiled from the source data and the training code.
    • sudo_cowsay 5 hours ago
      I sometimes wonder if there are any security risks with using Chinese LLMs. Is there?
      • dalemhurley 4 hours ago
        Theoretically yes. It is entirely possible to poison the training data for a supply chain attack against vibe coders. The trick would be to make it extremely specific for a high value target so it is not picked up by a wide range of people. You could also target a specific open source project that is used by another widely used product.

        However there is so many factors involved beyond your control that it would not be a viable option compared to other possible security attacks.

        • 2ndorderthought 1 hour ago
          I believe this is possible but unlikely. I don't think a Chinese company trying to break down the US's stronghold in this field would do this short term. I think it is in their best interest to be cheaper, better, easier, and more trust worthy until competition looks silly.

          It's like suggesting BYD has a high likelihood of making their cars into weapons or something. It's not in the company or their countries interest to do that.

          Sure it could happen but I bet it would only happen in a targeted way. Why risk all credibility right now and engage in cyber warfare?

          • SecretDreams 14 minutes ago
            Need the "why not both?" meme here.

            BYD and Tesla have the same ability to brick their cars anywhere. It's less a "weapon" and more a way to cripple a subset of people overnight if they so choose. A general major downside of "connected" products.

        • wallst07 1 hour ago
          or more obvious like TikTok.

          Meaning Tiktok in the us is complete garbage for kids, almost like a virus. Whereas in China it's more educational.

        • mazurnification 3 hours ago
          But propaganda or non ethical marketing - why not? (That is bias toward pointing to certain provider(s)).
        • _blk 3 hours ago
          Would be interesting to hook up a much simpler LLM as fact checker to see when errors are introduced.

          If I had to place a hidden target it'd probably be around RNGs or publicly exposed services..

      • oliwarner 4 hours ago
        If there is, couldn't they exist in any model?

        I don't mean that flippantly. These things are dumped in the wild, used on common (largely) open source execution chains. If you find a software exploit, it's going to affect your population too.

        Wet exploits are a bit harder to track. I'd assume there are plenty of biases based on training material but who knows if these models have a MKUltra training programme integrated into them?

      • surgical_fire 18 minutes ago
        I sometimes wonder is there are any security risks with using LLMs from the US.
      • rhubarbtree 4 hours ago
        Backdooring software at scale.

        Spearphishing.

        Building reliance and exploiting it, through state subsidies, dumping, and market manipulation.

        Handicapping provision to the west for competitive advantage.

        • 2ndorderthought 1 hour ago
          Do you think doing any of those things with in the next year does more to forward China as a super power then say, dethroning all of the US hype around LLMs?

          Tech ceos are going around talking about how they will rule over employees and they will be unable to work in the future except for intelligence tokens. What if China commoditizes that without spending nearly as much resources? Kind of makes the trillions of dollars invested in the US a literal joke.

      • cassianoleal 4 hours ago
        What about LLMs from other origins? What makes them less risky?
      • eucyclos 4 hours ago
        From my experience, kinda the opposite? It's like Chinese software is... Harder to weaponize or hurt yourself on. Deepseek is definitely censored, but I've never caught it being dishonest in a sneaky way.
      • Hamuko 4 hours ago
        There must be. The executives at my company wouldn't have banned them all for no reason after all.
      • c0nstantien 4 hours ago
        [dead]
      • baal80spam 4 hours ago
        Is this a serious comment? It honestly reads like the last famous words.

        Of course there are risks.

    • frankdenbow 2 hours ago
      Jensen was saying this in that interview last week and the interviewer dismissed it.
    • laurentiurad 3 hours ago
      not a full AI stack. Training still runs on NVIDIA chips.
    • nailer 1 hour ago
      It's also not fake open source like Metas models - https://huggingface.co/deepseek-ai/DeepSeek-R1-0528, the weights are actually under a real open source license, (MIT), see https://huggingface.co/deepseek-ai/DeepSeek-R1-0528
    • slekker 5 hours ago
      But remember to not ask about Taiwan!
      • tigrezno 4 hours ago
        you talk like there isn't censorship in american AIs, like Israel topics.
        • unclejuan 3 hours ago
          To be fair I prefer the Chinese models censorship (yes, seriously) because if you ask certain topics they just don't answer instead of giving skewed answers.
        • swingboy 2 hours ago
          Ask a Chinese model about Taiwan, get denied. Ask an American model about Israel, get citizenship revoked and deported.
      • spaceman_2020 2 hours ago
        I can't wait for Taiwan to peacefully reunify with the mainland so the west with its constant war waging won't even have this talking point
        • wallst07 1 hour ago
          Are you Taiwanese? If not, your statement is a slap in the face to those citizens.
      • eunos 3 hours ago
        > China asks other country not to meddle with internal separatism > They also dont support separatism in my country

        Understandable.

      • spiderfarmer 5 hours ago
        Just ask it for a summary of the USA’s role in Iran, Gaza, Lebanon and its recent threats against Panama, Cuba and Greenland! It might be able to keep track.
        • teiferer 4 hours ago
          Does all this insane behavior from the US justify the Chinese censorship?
          • samrus 2 hours ago
            Of course not. But its disengenuous to only mention one like the US is clearly th lesser of 2 evils
        • libertine 4 hours ago
          Are you implying that western models were manipulated to hide and distort those events, like they do with the Tiananmen Square event, and Taiwan?
          • LinXitoW 1 hour ago
            Imagine eastern models were only trained on chinese official news. Would you call that an unbiased, uncensored LLM? Would it be practically different from just directly censoring the LLM?

            In the west, especially in the USA, rich capitalists and warmongers control the narrative put forth in the news, which gets fed to the LLMs, which results in what you could call auto-censorship.

            They manipulate the training data instead of censoring the model, but the result is the same.

          • spiderfarmer 4 hours ago
            Let's say I'm more outraged by the actual events.
          • vkou 4 hours ago
            Ask Gemini today if the United States is trying to destroy the nation of Iran, and it will feed you the (white-washed) party line, straight from the White House, with a bit of 'some people disagree' thrown in. No mention of America's threats of "Complete annihilation", "Killing a civlization", and all the rest.

            > Summary: The U.S. is currently engaged in an active war aimed at dismantling the Iranian government and its military capabilities, but it distinguishes this from destroying the country or its people. However, the humanitarian impact—including civilian casualties from airstrikes and the domestic crackdown by Iranian security forces—has led many international observers to warn that the campaign risks long-term instability and "state collapse" rather than a simple transition of power.

            It does do quite a bit better if you ask it about the genocide in Gaza, summarizing the case for it, and citing only token justifications from the guilty party.

            As of April 2026, Gemini is... For very obvious reasons, highly biased towards cultural consensus. If your cultural consensus is strong on some really messed up things, that's the outcome that it's going to give you.

            • teiferer 4 hours ago
              Isn't there a difference between the models output reflecting the mean of public discourse and the active adjustment of information by the government?

              Irrespective of how close the outcomes are to the actual facts, those two things have a different quality, don't they?

              • vkou 3 hours ago
                > Isn't there a difference between the models output reflecting the mean of public discourse and the active adjustment of information by the government?

                Not as much a difference as you would wish, as mean of public discourse is very actively managed, to our collective detriment, by a very small group of powerful people, which often includes the government. It's the nature of mass media, and the incestuous relationship between power and reach.

                They Thought They Were Free, and all that. By the time the 'mean of public discourse' centers on something incredibly stupid or awful, nobody can be arsed to figure out who planted that idea in our heads.

                • wallst07 1 hour ago
                  I don't think so, from my peer group I don't see this bias. It really is a difference of opinion. Now you can say half the country is brain washed by propaganda, but those people would say the same of you.

                  In reality it's only the terminally online that seem to create these narratives.

                  My point isn't to pick one side or the other, but agreeing with the other poster that the LLMs are not trained specifically to parrot administration propaganda.

          • jfjdhdjdbdb 4 hours ago
            History is by definition his story.
            • Bayart 3 hours ago
              It's not. It's an English pun on a Greek word, which roughly means "investigation".
      • Lionga 4 hours ago
        Quit a bit better then made to bomb little girl schools in Iran.
      • Markoff 3 hours ago
        pretty sure you can ask whatever you want and it will tell you official stance agreed by almost all countries in the world that Taiwan is part of China as it's recognized by your own country (I don't even know where are you from, but there is like 98% chance I'm right)
      • man4 4 hours ago
        [dead]
    • trvz 3 hours ago
      [flagged]
      • kkzz99 3 hours ago
        This is so unbelievable racist and deranged.
        • trvz 3 hours ago
          Rule of thumb is: half the statements out of capitalist states are false, all statements out of communist(-ish) ones are false. No racism, I’m perfectly willing to believe half of what comes out of Taiwan.
    • nsoonhui 1 hour ago
      Sorry, but exactly where did you get the idea that DS V4 runs entirely on Huawei?

      I asked DS itself and it denied this. It says: 'Nvidia chips are absolutely used for DeepSeek V4. The reality is a pragmatic "both-and" strategy, not an "either-or."'

      And based on the DS V4 technical report (https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...), it is mentioned that:

        We validated the fine-grained EP scheme on both NVIDIA GPUs and HUAWEI Ascend NPUs platforms. Compared against strong non-fused baselines, it achieves 1.50 ~ 1.73× speedup for general inference workloads, and up to 1.96× for latency-sensitive scenarios such as RL rollouts and high-speed agent serving.
      
      (In all honesty I relied on DS to give me the above, so I haven't vetted the information in full.)

      It mentions that Nvidia is still used. It doesn't even mention that Huawei chips are used in production — only in testing and validation, yes.

      • taytus 1 hour ago
        >I asked DS itself and it denied this

        Bro, seriously?

  • hodgehog11 4 hours ago
    There are quite a few comments here about benchmark and coding performance. I would like to offer some opinions regarding its capacity for mathematics problems in an active research setting.

    I have a collection of novel probability and statistics problems at the masters and PhD level with varying degrees of feasibility. My test suite involves running these problems through first (often with about 2-6 papers for context) and then requesting a rigorous proof as followup. Since the problems are pretty tough, there is no quantitative measure of performance here, I'm just judging based on how useful the output is toward outlining a solution that would hopefully become publishable.

    Just prior to this model, Gemini led the pack, with GPT-5 as a close second. No other model came anywhere near these two (no, not even Claude). Gemini would sometimes have incredible insight for some of the harder problems (insightful guesses on relevant procedures are often most useful in research), but both of them tend to struggle with outlining a concrete proof in a single followup prompt. This DeepSeek V4 Pro with max thinking does remarkably well here. I'm not seeing the same level of insights in the first response as Gemini (closer to GPT-5), but it often gets much better in the followup, and the proofs can be _very_ impressive; nearly complete in several cases.

    Given that both Gemini and DeepSeek also seem to lead on token performance, I'm guessing that might play a role in their capacity for these types of problems. It's probably more a matter of just how far they can get in a sensible computational budget.

    Despite what the benchmarks seem to show, this feels like a huge step up for open-weight models. Bravo to the DeepSeek team!

    • ozgune 2 hours ago
      I reviewed how DeepSeek V4-Pro, Kimi 2.6, Opus 4.6, and Opus 4.7 across the same AI benchmarks. All results are for Max editions, except for Kimi.

      Summary: Opus 4.6 forms the baseline all three are trying to beat. DeepSeek V4-Pro roughly matches it across the board, Kimi K2.6 edges it on agentic/coding benchmarks, and Opus 4.7 surpasses it on nearly everything except web search.

      DeepSeek V4-Pro Max shines in competitive coding benchmarks. However, it trails both Opus models on software engineering. Kimi K2.6 is remarkably competitive as an open-weight model. Its main weakness is in pure reasoning (GPQA, HMMT) where it trails Opus.

      Speculation: The DeepSeek team wanted to come out with a model that surpassed proprietary ones. However, OpenAI dropped 5.4 and 5.5 and Anthropic released Opus 4.6 and 4.7. So they chose to just release V4 and iterate on it.

      Basis for speculation? (i) The original reported timeline for the model was February. (ii) Their Hugging Face model card starts with "We present a preview version of DeepSeek-V4 series". (iii) V4 isn't multimodal yet (unlike the others) and their technical report states "We are also working on incorporating multimodal capabilities to our models."

    • lifty 4 hours ago
      Wondering how gpt 5.5 is doing in your test. Happy to hear that DeepSeek has good performance in your test, because my experience seems to correlate with yours, for the coding problems I am working on. Claude doesn't seem to be so good if you stray away from writing http handlers (the modern web app stack in its various incarnations).
      • hodgehog11 3 hours ago
        Very cool to hear there is agreement with (probably quite challenging?) coding problems as well.

        Just ran a couple of them through GPT 5.5, but this is a single attempt, so take any of this with a grain of salt. I'm on the Plus tier with memory off so each chat should have no memory of any other attempt (same goes for other models too).

        It seems to be getting more of the impressive insights that Gemini got and doing so much faster, but I'm having a really hard time getting it to spit out a proper lengthy proof in a single prompt, as it loves its "summaries". For the random matrix theory problems, it also doesn't seem to adhere to the notation used in the documents I give it, which is a bit weird. My general impression at the moment is that it is probably on par with Gemini for the important stuff, and both are a bit better than DeepSeek.

        I can't stress how much better these three models are than everything else though (at least in my type of math problems). Claude can't get anything nontrivial on any of the problems within ten (!!) minutes of thinking, so I have to shut it off before I run into usage limits. I have colleagues who love using Claude for tiny lemmas and things, so your mileage may vary, but it seems pretty bad at the hard stuff. Kimi and GLM are so vague as to be useless.

        • lifty 2 hours ago
          My work is on a p2p database with quite weird constraints and complex and emergent interactions between peers. So it's more a system design problem than coding. Chatgpt 5.x has been helping me close the loop slowly while opus did help me initially a lot but later was missing many of the important details, leading to going in circles to some degree. Still remains to be seen if this whole endeavour will be successful with the current class of models.
    • nibbleyou 4 hours ago
      Curious to know what kind of problems you are talking about here
      • hodgehog11 4 hours ago
        I don't want to give away too much due to anonymity reasons, but the problems are generally in the following areas (in order from hardest to easiest):

        - One problem on using quantum mechanics and C*-algebra techniques for non-Markovian stochastic processes. The interchange between the physics and probability languages often trips the models up, so pretty much everything tends to fail here.

        - Three problems in random matrix theory and free probability; these require strong combinatorial skills and a good understanding of novel definitions, requiring multiple papers for context.

        - One problem in saddle-point approximation; I've just recently put together a manuscript for this one with a masters student, so it isn't trivial either, but does not require as much insight.

        - One problem pertaining to bounds on integral probability metrics for time-series modelling.

        • pm2r 4 hours ago
          It would be wonderful to have a deeper insight, but I understand that you can disclose your identity (I understand that you work in applied research field, right ? )
          • hodgehog11 3 hours ago
            Yes, I do mostly applied work, but I come from a background in pure probability so I sometimes dabble in the fundamental stuff when the mood strikes.

            Happy to try to answer more specific questions if anyone has any, but yes, these are among my active research projects so there's only so much I can say.

  • XCSme 18 minutes ago
    Something is odd with this model, their blog posts shows REALLY good results, but in most other third-party benchmarks, people realize it's not really SOTA, even bellow Kimi K2.6 and GLM-5/5.1

    In my tests too[0], it doesn't reach top 10. One issue, which they also mentioned in their post, is that they can't really serve well the model at the moment, so V4-Pro is heavily rate-limited and gives a lot of timeout errors when I try to test it. This shouldn't be an issue though, considering the model is open-source, but it makes it hard to accurately test at the moment.

    [0]: https://aibenchy.com/compare/deepseek-deepseek-v4-flash-high...

  • throwa356262 5 hours ago
    Seriously, why can't huge companies like OpenAI and Google produce documentation that is half this good??

    https://api-docs.deepseek.com/guides/thinking_mode

    No BS, just a concise description of exactly what I need to write my own agent.

    • u_sama 4 hours ago
      I am very partial to Mistral's API docs https://docs.mistral.ai/api
    • vitorgrs 5 hours ago
      Meanwhile, they don't actually say which model you are running on Deepseek Chat website.
    • lykr0n 5 hours ago
      It's because they're optimizing for a different problem.

      Western Models are optimizing to be used as an interchangeable product. Chinese models are being optimizing to be built upon.

      • Barbing 4 hours ago
        >Western Models are optimizing to be used as an interchangeable product.

        But so much investment in their platforms, not just their APIs?

      • raincole 5 hours ago
        > Western Models are optimizing to be used as an interchangeable product

        Why? It sounds like the stupidest idea ever. Interchangeability = no lock-in = no moot.

        • setr 3 hours ago
          First you clone the API of the winner, because you want to siphon users from its install-base and offer de-risked switch over cost.

          Now that you’re winning, others start cloning your API to siphon your users.

          Now that you’re losing, you start cloning the current winner, who is probably a clone of your clone.

          Highly competitive markets tend to normalize, because lock-in is a cost you can’t charge and remain competitive. The customer holds power here, not the supplier.

          Thats also why everyone is trying to build into the less competitive spaces, where they could potentially moat. Tooling, certs, specialized training data, etc

        • hunter67 4 hours ago
          Our (western) economic model forces competing individual companies to be profitable quickly. China can ignore DeepSeek losing money, because they know developing DeepSeek will help China. Not every institution needs to be profitable.
          • deaux 3 minutes ago
            Ah yes, the Western economic model forcing individual American companies like Amazon , Youtube and Uber to become profitable after.. checks notes _14 years_ for Uber, 9 years for Amazon, many years for Youtube.
          • naveen99 1 hour ago
            You mean like intel, tesla, spacex, openai ?
        • FuckButtons 4 hours ago
          yes, they want to win the same way they won more or less every other economic competition in the last 30 years, scale out, drop prices and asphyxiate the competition.
        • simonjgreen 4 hours ago
          Yeah, it’s an interesting one. I think inertia and expectations at this point? I don’t think the big labs anticipated how low the model switching costs would be and how quickly their leads would be eroded (by each other and the upstarts)

          They are developing their moats with the platform tooling around it right now though. Look at Anthropic with Routines and OpenAI with Agents. Drop that capability in to a business with loose controls and suddenly you have a very sticky product with high switching costs. Meanwhile if you stick with purely the ‘chat’ use cases, even Cowork and scheduled tasks, you maintain portability.

        • tick_tock_tick 4 hours ago
          They are all racing to AGI. They aren't designing them to be interchangeable they just happen to be.
          • rglullis 4 hours ago
            No, they are not. If they were "racing to AGI" they would be working together. OpenAI would still be focused on being a non-profit. Anthropic wouldn't be blocking distillation on their models.
          • koe123 4 hours ago
            If by AGI you mean IPO, sure. I genuinely don't believe Dario nor Sam should be trusted at this point. Elon levels of overpromising and underdelivering.
            • djmips 3 hours ago
              If by AGI you mean IPO - I automatically read that in Fireship's voice. XD
        • peepee1982 5 hours ago
          If you want other people to know whether you're being genuine or sarcastic, you'll have to put a bit more effort into your comments. Your comment just adds noise.
    • Alifatisk 5 hours ago
      You might enjoy Z.ais api docs aswell
    • kubb 4 hours ago
      Western orgs have been captured by Silicon Valley style patrimonialism, and aren’t based on merit anymore.
  • orbital-decay 5 hours ago
    >we implement end-to-end, bitwise batch-invariant, and deterministic kernels with minimal performance overhead

    Pretty cool, I think they're the first to guarantee determinism with the fixed seed or at the temperature 0. Google came close but never guaranteed it AFAIK. DeepSeek show their roots - it may not strictly be a SotA model, but there's a ton of low-level optimizations nobody else pays attention to.

  • xingyi_dev 3 hours ago
    Deepseek v4 is basically that quiet kid in the back of the class who never says a word but casually ruins the grading curve for everyone else on the final exam.
  • chenzhekl 3 hours ago
    It's interesting that they mentioned in the release notes:

    "Limited by the capacity of high-end computational resources, the current throughput of the Pro model remains constrained. We expect its pricing to decrease significantly once the Ascend 950 has been deployed into production."

    https://api-docs.deepseek.com/zh-cn/news/news260424#api-%E8%...

  • revolvingthrow 6 hours ago
    > pricing "Pro" $3.48 / 1M output tokens vs $4.40

    I’d like somebody to explain to me how the endless comments of "bleeding edge labs are subsidizing the inference at an insane rate" make sense in light of a humongous model like v4 pro being $4 per 1M. I’d bet even the subscriptions are profitable, much less the API prices.

    edit: $1.74/M input $3.48/M output on OpenRouter

    • schneehertz 5 hours ago
      This price is high even because of the current shortage of inference cards available to DeepSeek; they claimed in their press release that once the Ascend 950 computing cards are launched in the second half of the year, the price of the Pro version will drop significantly
      • Bombthecat 5 hours ago
        In six month deepseek won't be sota anymore und usage will be wayyyy down.
        • 2ndorderthought 1 hour ago
          A huge proportion of those scores are gamed anyways. Use whatever works for you at the price and availability you can afford
        • randomgermanguy 2 hours ago
          Only comparing on SOTA scores (ignoring price etc.) is like choosing your daily-driver by looking at who makes the fastest sports-car...
          • LinXitoW 1 hour ago
            The constant improvements of SOTA are the main thing keeping the investment machine running. We can't really remove training costs from inference costs, because a bunch of the funding and loans for the inference hardware only exists because the promises the continuous training (tries to) provides.
          • dnnddidiej 2 hours ago
            Not really. SOTA vs non SOTA is "can I get my coding work actually done today" vs. "this can do customer support chat"

            It is like car vs. kick scooter.

            • randomgermanguy 10 minutes ago
              > "can I get my coding work actually done today" vs. "this can do customer support chat"

              I think you need to define "can get coding work done" for this to make sense. Ive been using GPT-3 back-then for basic scripts, does that count ? Or only Claude-Code ?

              I also think this is a false dichotomy, if you look at the Project Vend project or Vending-Bench, customer support etc. is at no means trivial. (Old but great story https://www.businessinsider.com/car-dealership-chevrolet-cha...)

            • regularfry 1 hour ago
              It really isn't. We get coding work actually done today on Opus 4.5. That's not SOTA any more, and anything proximate to that level, even quite loosely, is genuinely useful.
              • dnnddidiej 57 minutes ago
                OK we are in Opus 4.5 is not SOTA. Right by that definition .... yes you are right.
                • randomgermanguy 17 minutes ago
                  I mean its almost halve a year, i think that counts ?
        • Palmik 2 hours ago
          Or there will be DSv4.1/2/3 ;)
          • randomgermanguy 8 minutes ago
            Definitely something in this realm, they call the models "preview" at a bunch of different points in the paper.

            What im really hoping is for a double-punch like with V3 -> R1

        • man4 4 hours ago
          [dead]
        • Barbing 4 hours ago
          Well, if they distilled once…
    • menzoic 5 hours ago
      API prices may be profitable. Subscriptions may still be subsidized for power users. Free tiers almost certainly are. And frontier labs may be subsidizing overall business growth, training, product features, and peak capacity, even if a normal metered API call is profitable on marginal inference.
      • dannyw 4 hours ago
        Research and training costs have to be amortized from somewhere; and labs are always training. I'm definitely keen for the financials when the two files for IPO though, it would be interesting to see; although I'm sure it won't be broken down much.
    • adam_patarino 18 minutes ago
      Prices are not just hard cost of inference. Training costs are not equal. Chinese labs have cheaper access to large data centers. I also suspect they operate far more efficiently than orgs like openAI.
    • m00x 5 hours ago
      They are profitable to opex costs, but not capex costs with the current depreciation schedules, though those are now edging higher than expected.
      • nl 3 hours ago
        Amazingly, the current depreciation overestimates the retained value of GPUs.

        In 2023, the depreciation schedule for H100s was 2 years, but they are still oversubscribed and generating signficant income.

        Coreweve has upped their depreciation for GPUs to 6 years(!) now, which seems more realistic.

        https://www.silicondata.com/blog/h100-rental-price-over-time

    • LinXitoW 1 hour ago
      They got loans to buy inference hardware on the promise of potential AGI, or at least something approaching ASI, all leading to stupid amounts of profit for those investors.

      We therefore cannot just look at inference costs directly, training is part of the pitch. Without the promises of continuous improvement and chasing the elusive AGI, money for investments for inference evaporates.

    • amunozo 5 hours ago
      I was thinking the same. How can it be than other providers can offer third-party open source models with roughly the similar quality like this, Kimi K2.6 or GLM 5.1 for 10 times less the price? How can it be that GPT 5.5 is suddenly twice the price as GPT 5.4 while being faster? I don't believe that it's a bigger, more expensive model to run, it's just they're starting to raise up the prices because they can and their product is good (which is honest as long as they're transparent with it). Honestly the movement about subscription costing the company 20 times more than we're paying is just a PR movement to justify the price hike.
      • peepee1982 5 hours ago
        I'm pretty sure OpenAI and Anthropic are overpricing their token billed API usage mainly as an incentive to commit to get their subscriptions instead.
        • simonjgreen 4 hours ago
          Anthropic recently dropped all inclusive use from new enterprise subscriptions, your seat sub gets you a seat with no usage. All usage is then charged at API rates. It’s like a worst of both worlds!
          • peepee1982 4 hours ago
            What's the point then? Special conditions for data retention/non-training policies?
            • simonjgreen 4 hours ago
              SSO Tax is a large part of it, controls around plug-in marketplace, enforcement of config, observeability of spend. But it’s all pretty weak really for $20 a month.

              And Microsoft are going the same route to moving Copilot Cowork over to a utilisation based billing model which is very unusual for their per seat products (I’m actually not sure I can ever remember that happening).

        • weird-eye-issue 4 hours ago
          The target audience for the APIs is third party apps which are not compatible with the subscriptions.
    • raincole 5 hours ago
      Insert always has been meme.

      But seriously, it just stems from the fact some people want AI to go away. If you set your conclusion first, you can very easily derive any premise. AI must go away -> AI must be a bad business -> AI must be losing money.

      • louiereederson 11 minutes ago
        It is possible to question the sustainability of the AI buildout and not have a dogmatic position on AI development.

        There are still major unanswered questions here. For instance, all of the incremental data capacity build out is going to businesses that have totally unknown LT unit economics and that today are burning obscene amounts of cash.

      • zarzavat 5 hours ago
        Before the AI bubble that will burst any time now, there was the AI winter that would magically arrive before the models got good enough to rival humans.
    • mirzap 6 hours ago
      My thoughts exactly. I also believe that subscription services are profitable, and the talk about subsidies is just a way to extract higher profit margins from the API prices businesses pay.
      • Bombthecat 5 hours ago
        Google stated a while back, that with tpus they are able to sell at cost / with profit.

        Aka: everyone who uses Nvidia isn't selling at cost, because Nvidia is so expensive.

    • crazylogger 4 hours ago
      I haven't seen anyone claiming that API prices are subsidized.

      At some point (from the very beginning till ~2025Q4) Claude Code's usage limit was so generous that you can get roughly $10~20 (API-price-equivalent) worth of usage out of a $20/mo Pro plan each day (2 * 5h window) - and for good reason, because LLM agentic coding is extremely token-heavy, people simply wouldn't return to Claude Code for the second time if provided usage wasn't generous or every prompt costs you $1. And then Codex started trying to poach Claude Code users by offering even greater limits and constantly resetting everyone's limit in recent months. The API price would have to be 30x operating cost to make this not a subsidy. That would be an extraordinary claim.

      • nl 3 hours ago
        The claim that APIs are subsidized is very common.

        eg:

        Token prices are significantly subsidized and anyone that does any serious work with AI can tell you this.

        https://news.ycombinator.com/item?id=47684887

        (the claims don't make any sense, but they are widely held)

        • vessenes 2 hours ago
          I’ll note that it’s common and dangerous, in that there’s a generation of engineers who are at risk of leading each-other astray as to the economics and therefore probability distribution of outcomes for some firms that will massively impact their careers.

          I think I understand the major reasons for this meme, but I find it really worrying; there were lots of incorrect ‘it’s a bubble’ conversations here in 2012-2015, but I don’t think they had the pervasive nature and “obvious” conclusion that a whole generation of engineering talent should just, you know, leave.

          Meanwhile I am hearing rational economic modeling from the companies selling inference; Jensen, (a polished promoter, I grant you) says it really well — token value is increasing radically, in that new models -> better quality, and therefore revenues and utilization are increasing, and therefore contrary to the popular financial and techbro modeling of 2023, things like A100s still cost quite a lot whether hourly or to purchase. (!) Basically the economic value is so strong that it has actually radically extended the life of hardware.

          I just hate to imagine like half of the world’s (or US’s) engineering talent quitting, spending ten years afraid, or wrongly convinced of some ‘inevitable’ market outcome. Feels like it will be bad for people’s personal lives, and bad for progress simultaneously.

      • dannyw 3 hours ago
        Yeah, subscriptions used to be extraordinarily generous. I miss those days, but the reinvigoration of open weight models is super exciting.

        I'm still playing with the new Qwen3.6 35B and impressed, now DeepSeek v4 drops; with both base and instruction-tuned weights? There goes my weekend :P

    • vitorgrs 5 hours ago
      And they actually say the prices will be "significantly" lower in second semester when Huawei 650 chips comes in.
    • jimmydoe 5 hours ago
      They’ve also announced Pro price will further drop 2H26 once they have more HUAWEI chips.
    • masafej536 5 hours ago
      Point taken but there isnt any western providers there yet. Power is cheaper in china.
      • 3uler 5 hours ago
        These models are open and there are tons of western providers offering it at comparable rates.
      • NitpickLawyer 5 hours ago
        As this is a new arch with tons of optimisations, it'll take some time for inference engines to support it properly, and we'll see more 3rd party providers offer it. Once that settles we'll have a median price for an optimised 1.6T model, and can "guesstimate" from there what the big labs can reasonably serve for the same price. But yeah, it's been said for a while that big labs are ok on API costs. The only unknown is if subscriptions were profitable or not. They've all been reducing the limits lately it seems.
        • ithkuil 3 hours ago
          Is there evidence that frontier models at anthropic, openai or google or whatnot are not using comparable optimizations to draw down their coats and that their markup is just higher because they can?
    • Flavius 2 hours ago
      It's because investors in OpenAI/Anthropic want to get their money back in 10 months, not in 10 years.
    • casey2 3 hours ago
      It's the decades of performance doesn't matter SV/web culture. I'd be surprised if over 1% of OpenAI/Anthropic staff know how any non-toy computer system works.
    • dminik 5 hours ago
      I mean, not one "bleeding edge" lab has stated they are profitable. They don't publish financials aside from revenue. And in Anthropic's case, they fuck with pricing every week. Clearly something is wrong here.
      • npn 1 hour ago
        you know, if you don't have to pay insane salary for your top engineers, and don't have to pay billions for internet shills to control the narrative, then all of the labs will be insane profitable.
    • sekai 5 hours ago
      > I’d like somebody to explain to me how the endless comments of "bleeding edge labs are subsidizing the inference at an insane rate" make sense in light of a humongous model like v4 pro being $4 per 1M. I’d bet even the subscriptions are profitable, much less the API prices.

      One answer - Chinese Communist Party. They are being subsidized by the state.

      • lbreakjai 3 hours ago
        When China does it it's communism. When companies in the west get massive tax cuts, rebates, incentives and subsidies, that's just supporting the captains of industry.
  • fblp 8 hours ago
    There's something heartwarming about the developer docs being released before the flashy press release.
    • taurath 4 hours ago
      Their audience is people who build stuff, techs audience is enterprise CEOs and politicians, and anyone else happy to hype up all the questionably timed releases and warnings of danger, white collar irrelevence, or promises of utopian paradise right before a funding round.
    • onchainintel 8 hours ago
      Insert obligatory "this is the way" Mando scene. Indeed!
    • necovek 7 hours ago
      Where's the training data and training scripts since you are calling this open source?

      Edit: it seems "open source" was edited out of the parent comment.

      • b65e8bee43c2ed0 6 hours ago
        doesn't it get tiring after a while? using the same (perceived) gotcha, over and over again, for three years now?

        no one is ever going to release their training data because it contains every copyrighted work in existence. everyone, even the hecking-wholesome safety-first Anthropic, is using copyrighted data without permission to train their models. there you go.

        • necovek 6 hours ago
          There is an easy fix already in widespread use: "open weights".

          It is very much a valuable thing already, no need to taint it with wrong promise.

          Though I disagree about being used if it was indeed open source: I might not do it inside my home lab today, but at least Qwen and DeepSeek would use and build on what eg. Facebook was doing with Llama, and they might be pushing the open weights model frontier forward faster.

          • JumpCrisscross 2 hours ago
            > There is an easy fix already in widespread use: "open weights"

            They're both correct given how the terms are actually used. We just have to deduce what's meant from context.

            There was a moment, around when Llama was first being released, when the semantics hadn't yet set. The nutter wing of the FOSS community, to my memory, put forward a hard-line and unworkable definition of open source and seemed to reject open weights, too. So the definition got punted to the closest thing at hand, which was open weights with limited (unfortunately, not no) use restrictions. At this point, it's a personal preference that's at most polite to respect if you know your audience has one.

          • dannyw 3 hours ago
            Yeah, open weights is really good, especially when base models (not just the instruction tuned) weights are released like here.
        • Tepix 5 hours ago
          Nvidia did with Nemo.
        • fragmede 6 hours ago
          it's not a gotcha but people using words in ways others don't like.
          • a96 4 hours ago
            It's not about likes, it's a flat out lie.
      • woctordho 3 hours ago
        They are exactly open source. The training data is the internet. Don't say it's on the internet. It IS the internet.

        The training scripts are in Megatron and vLLM.

      • bl4ckneon 6 hours ago
        Aww yes, let me push a couple petabytes to my git repo for everyone to download...
        • necovek 6 hours ago
          An easier thing would be to say "open weights", yes.
      • 0-_-0 6 hours ago
        Weights are the source, training data is the compiler.
        • injidup 5 hours ago
          You got it the wrong way round. It's more akin to.

          1. Training data is the source. 2. Training is compilation/compression. 3. Weights are the compiled source akin to optimized assembly.

          However it's an imperfect analogy on so many levels. Nitpick away.

  • dizhn 1 hour ago
    I like deepseek. It works very well. I haven't tried v4 yet but on their web chat interface, just typing "Taiwan" causes it to give you a lecture about how Taiwan is part of China. :)
    • jyscao 43 minutes ago
      What a gotcha
  • gbnwl 8 hours ago
    I’m deeply interested and invested in the field but I could really use a support group for people burnt out from trying to keep up with everything. I feel like we’ve already long since passed the point where we need AI to help us keep up with advancements in AI.
    • satvikpendem 6 hours ago
      Don't keep up. Much like with news, you'll know when you need to know, because someone else will tell you first.
      • vessenes 2 hours ago
        This is only good advice if you don’t have the need to understand what’s happening on the edge of the frontier. If you do, then you’ll lose on compounding the knowledge from staying engaged with the major developments.
    • wordpad 8 hours ago
      The players barely ever change. People don't have problems following sports, you shouldn't struggle so much with this once you accept top spot changes.
      • ehnto 7 hours ago
        It is funny seeing people ping pong between Anthropic and ChatGPT, with similar rhetoric in both directions.

        At this point I would just pick the one who's "ethics" and user experience you prefer. The difference in performance between these releases has had no impact on the meaningful work one can do with them, unless perhaps they are on the fringes in some domain.

        Personally I am trying out the open models cloud hosted, since I am not interested in being rug pulled by the big two providers. They have come a long way, and for all the work I actually trust to an LLM they seem to be sufficient.

        • DiscourseFan 7 hours ago
          I find ChatGPT annoying mostly
          • awakeasleep 7 hours ago
            Open settings > personalization. Set it to efficient base style. Turn off enthusiasm and warmth. You’re welcome
            • 2ndorderthought 1 hour ago
              Yea but even then it's still annoying. "It's not about the enthusiasm and warmth but the general tone"
      • gbnwl 7 hours ago
        I didn't express this well but my interest isn't "who is in the top spot", and is more _why and _how various labs get the results they do. This is also magnified by the fact that I'm not only interested in hosted providers of inference but local models as well. What's your take on the best model to run for coding on 24GB of VRAM locally after the last few weeks of releases? Which harness do you prefer? What quants do you think are best? To use your sports metaphor it's more than following the national leagues but also following college and even high school leagues as well. And the real interest isn't even who's doing well but WHY, at each level.
    • dnnddidiej 2 hours ago
    • vrganj 6 hours ago
      It honestly has all kinda felt like more of the same ever since maybe GPT4?

      New model comes out, has some nice benchmarks, but the subjective experience of actually using it stays the same. Nothing's really blown my mind since.

      Feels like the field has stagnated to a point where only the enthusiasts care.

      • ifwinterco 5 hours ago
        For coding Opus 4.5 in q3 2025 was still the best model I've used.

        Since then it's just been a cycle of the old model being progressively lobotomised and a "new" one coming out that if you're lucky might be as good as the OG Opus 4.5 for a couple of weeks.

        Subjective but as far as I can tell no progress in almost a year, which is a lifetime in 2022-25 LLM timelines

    • trueno 6 hours ago
      holy shit im right there with you
  • sergiopreira 50 minutes ago
    DeepSeek is commoditizing frontier capability... Opus 4.6-level benchmarks at a fraction of the cost changes also who can access these tools.

    Stuff that was prohibitive six months ago is now up for grabs. We keep on working on the infra level now, swithcing models whenever we run out of credits, or want a different result. The question is how do we build context, architecture and ensure the agent is effective and efficient..... wouldn't it be good if we simply used less energy to make these AI calls?

  • sho 5 hours ago
    So, this is the version that's able to serve inference from Huawei chips, although it was still trained on nVidia. So unless I'm very much mistaken this is the biggest and best model yet served on (sort of) readily-available chinese-native tech. Performance and stability will be interesting to see; openrouter currently saying about 1.12s and 30tps, which isn't wonderful but it's day one after all.

    For reference, the huawei Ascend 950 that this thing runs on is supposed to be roughly comparable to nVidia's H100 from 2022. In other words, things are hotting up in the GPU war!

    • alpineman 4 hours ago
      Can't see how NVIDA justifies its valuation/forward P/E ratio with these developments and on-device also becoming viable for 98% of people's needs when it comes to AI
      • aurareturn 4 hours ago
        On-device is incredibly far away from being viable. A $20 ChatGPT subscription beats the hell out of the 8B model that a $1,000 computer can run.

        Nvidia's forward PE ratio is only 20 for 2026. That's much lower than companies like Walmart and Costco. It's also growing nearly 100% YoY and has a $1 trillion backlog.

        I think Nvidia is cheap.

        • 2ndorderthought 59 minutes ago
          8b models can run on laptops. Of course a 1.8T model is more capable, but for a lot of tasks it really isn't 1000x
        • midwain 2 hours ago
          This is an assessment of the moment. When rate of AI data center construction slows down, then P/E will start to grow. Or are we saying that the pace will only grow forever? There are already signs of a slowdown in construction.
        • littlestymaar 1 hour ago
          > On-device is incredibly far away from being viable. A $20 ChatGPT subscription beats the hell out of the 8B model that a $1,000 computer can run.

          That's a very strange comment. Why would anyone run a dense model on a low-end computer? A 8B model is only going to make sense if you have a dGPU. And a Qwen3.6 or Gemma4 MoE aren't going to be “beaten the hell out” for most tasks especially if you use tools.

          Finally, over the lifetime of your computer, your ChatGPT subscription is going to cost more than the cost of your reference computer! So the real question should be whether you're better off with a $1000 computer and a ChatGPT subscription or with a $2000 computer (assuming a conservative lifetime of 4 years for the computer).

          My Strix Halo desktop (which I paid ~1700€ before OpenAI derailed the RAM market) paired with Qwen3.5 is a close replacement for a $200/month subscription, so the cost/benefit ratio is strongly in favor of the local model in my use case.

          The complexity of following model releases and installing things needed for self-hosting is a valid argument against local models, but it's absolutely not the same thing as saying that local models are too bad to use (which is complete BS).

        • dannyw 3 hours ago
          I do think Nvidia isn't that badly priced; they still have the dominance in training and the proven execution

          Biggest risk I see is Nvidia having delays / bad luck with R&D / meh generations for long enough to depress their growth projections; and then everything gets revalued.

    • npodbielski 4 hours ago
      Great! Can't wait to buy decent GPU for interference for <1k$
  • primaprashant 5 hours ago
    While SWE-bench Verified is not a perfect benchmark for coding, AFAIK, this is the first open-weights model that has crossed the threshold of 80% score on this by scoring 80.6%.

    Back in Nov 2025, Opus 4.5 (80.9%) was the first proprietary model to do so.

  • yanis_t 7 hours ago
    Already on Openrouter. Pro version is $1.74/m/input, $3.48/m/output, while flash $0.14/m/input, 0.28/m/output.
    • nl 3 hours ago
      The Pro model is giving 429 Overload errors
      • XCSme 14 minutes ago
        Yup, can't really be used in production atm.
    • astrod 7 hours ago
      Getting 'Api Error' here :( Every other model is working fine.
      • poglet 6 hours ago
        Try interacting with it through the website, it will give an error and some explanation on the issue. I had to relax my guardrail settings.
    • esafak 7 hours ago
      • 77ko 7 hours ago
        Its on OR - but currently not available on their anthropic endpoint. OR if you read this, pls enable it there! I am using kimi-2.6 with Claude Code, works well, but Deepseek V4 gives an error:

        `https://openrouter.ai/api/messages with model=deepseek/deepseek-v4-pro, OR returns an error because their Anthropic-compat translator doesn't cover V4 yet. The Claude CLI dutifully surfaces that error as "model...does not exist"

  • seanobannon 8 hours ago
  • amunozo 5 hours ago
    For those who rely on open source models but don't want to stop using frontier models, how do you manage it? Do you pay any of the Chinese subscription plans? Do you pay the API directly? After GPT 5.5 release, however good it is, I am a bit tired of this price hiking and reduced quota every week. I am now unemployed and cannot afford more expensive plans for the moment.
    • regularfry 57 minutes ago
      I've been on Kimi K2.5 on openrouter for a couple of months for anything I can't run locally. Really is dirt cheap for how good it is. Haven't assessed K2.6 yet but the price is higher so it needs to be more efficient, not just more capable.

      But more broadly: openrouter solves the problem of making a broad range of models available with a single payment endpoint, so you can just switch around as much as you like.

    • solarkraft 1 hour ago
      At home I currently use MiniMax via OpenRouter - it’s pretty good and very cheap. They have a subscription plan, but I’m not ready to commit to it yet.

      Another way to keep the ability to try out new models is to buy a reseller subscription like Cursor’s.

      • amunozo 57 minutes ago
        I tried OpenRouter but I feel the money flies even with these models, it is not comparable to a subscription but yes, it's very good for trying. Maybe I should test other models alongside GPT 5.5 to see which one fits me.
    • azuanrb 3 hours ago
      I have $20 ChatGPT subscription. Stopped Anthropic $20 subscription since the limit ran out too fast. That's my frontier model(s).

      For OSS model, I have z.ai yearly subscription during the promo. But it's a lot more expensive now. The model is good imo, and just need to find the right providers. There are a lot of alternatives now. Like I saw some good reviews regarding ollama cloud.

      • amunozo 1 hour ago
        I am thinking about getting some 1 year promotion as a student before defending my PhD.
    • the_gipsy 3 hours ago
      Have you considered... not subscribing? You can ask the top models via chats for specific stuff, and then set up some free CLI like mistral.

      If you're trying to make a buck while unemployed, sure get a subscription. Otherwise learn how to work again without AI, just focus on the interesting stuff.

      • amunozo 3 hours ago
        I just want to try to make something useful out of my time, that's why I'm subscribed to Codex at the moment. 20€ is affordable, not really a problem. But yes, maybe I would do me a favor unsubscribing and going back to the old ways to learn properly.
        • the_gipsy 3 hours ago
          I'm "working" on some open source stuff with minimal AI. But I will probably cave in at some point and get a subscription again, the moment I need to spin up a mountain of garbage, fast.
    • cmrdporcupine 1 hour ago
      For DeepSeek you can use their API and if you ran it constantly you'd still be under what OpenAI or Anthropic charge for a coding plan.
      • anentropic 1 hour ago
        I had Claude make me a quick tool to combine my Claude Code token usage (via ccusage util) with OpenRouter pricing from the models API

        I'm on Max x5 plan and any of the 'good' models like Kimi 2.6, GLM, DeepSeek would have cost 3-5x in per-token billing for what I used on my Claude plan the last three months

        So unless my Claude fudged the maths to make itself look better, seems like I'm getting a good deal

      • amunozo 56 minutes ago
        I am not so sure, credits fly when using any model trough API if I use it as much as I use Codex.
  • mchusma 7 hours ago
    For comparison on openrouter DeepSeek v4 Flash is slightly cheaper than Gemma 4 31b, more expensive than Gemma 4 26b, but it does support prompt caching, which means for some applications it will be the cheapest. Excited to see how it compares with Gemma 4.
    • MillionOClock 5 hours ago
      I wonder why there aren't more open weights model with support for prompt caching on OpenRouter.
      • mzl 4 hours ago
        It is tricky to build good infrastructure for prompt caching.
  • sidcool 7 hours ago
    Truly open source coming from China. This is heartwarming. I know if the potential ulterior motives.
    • b65e8bee43c2ed0 6 hours ago
      American companies want a scan of your asshole for the privilege of paying to access their models, and unapologetically admit to storing, analyzing, training on, and freely giving your data to any authorities if requested. Chinese ulteriority is hypothetical, American is blatant.
      • elefanten 5 hours ago
        It’s not remotely hypothetical you’d have to be living under a rock to believe that. And the fusion with a one-party state government that doesn’t tolerate huge swathes of thoughtspace being freely discussed is completely streamlined, not mediated by any guardrails or accountability.

        This “no harm to me” meme about a foreign totalitarian government (with plenty of incentive to run influence ops on foreigners) hoovering your data is just so mind-bogglingly naive.

        • ben_w 5 hours ago
          As a non-American, everything you wrote other than "one party" applies to the current US regime.

          Relatively speaking, DeepSeek is less untrustworthy than Grok.

          When I try ChatGPT on current events from the White House it interprets them as strange hypotheticals rather than news, which is probably more a problem with DC than with GPT, but whatever.

        • oceanplexian 5 hours ago
          > And the fusion with a one-party state government that doesn’t tolerate huge swathes of thoughtspace being freely discussed

          That would be a great argument if the American models weren’t so heavily censored.

          The Chinese model might dodge a question if I ask it about 1-2 specific Chinese cultural issues but then it also doesn’t moralize me at every turn because I asked it to use a piece of security software.

          • donbreo 3 hours ago
            Just ask it to "name the states in india" or "what happened in 1989"
        • randomNumber7 5 hours ago
          The USA has one of the highest percentages of their population in prison.

          Even for minor stuff like beeing addicted to drugs.

          Looks pretty totalitarian to me.

          • thesmtsolver2 4 hours ago
            Do you really trust China’s stats on prison population?

            Note: you can have this conversation criticizing the US on a US website. Try criticizing Xi or the CCP or calling him Pooh on a Chinese website.

            You think China doesn’t imprison drug users?

            China recently executed a low level drug trafficker

            https://www.lemonde.fr/en/international/article/2026/04/05/c...

            China is one of the top executioners. China executes more than rest of the world combined

            https://www.amnesty.org/en/latest/news/2017/04/china-must-co...

            You think China is honest about political prisoners in Tibet and Xinjiang?

            Criticize the US all you want but I can’t understand the whitewashing of a real totalitarian and genocidal state like mainland China.

            • randomNumber7 1 hour ago
              Both can be totalitarian. Both are shit imho. I just don't buy the argument that China is worse because of it.

              But if we start nitpicking the US also executes people all over the world without trial and has secret prisons worldwide where they put people (guess what) without trial.

            • chronc6393 2 hours ago
              mic drop
          • bdamm 4 hours ago
            And in China the state can harvest your organs for political crimes or even just being the wrong religion.

            Not quite the same.

          • FuckButtons 4 hours ago
            I’ll be sure to pick up my copy of the peoples daily to read about those statistics in the morning.
        • b65e8bee43c2ed0 5 hours ago
          >This “no harm to me” meme about a foreign totalitarian government (with plenty of incentive to run influence ops on foreigners) hoovering your data is just so mind-bogglingly naive.

          yes, this is exactly what I'm saying.

        • danny_codes 5 hours ago
          It’s an open model? So you can run it yourself if you want to
        • theshackleford 5 hours ago
          > This “no harm to me” meme about a foreign totalitarian government (with plenty of incentive to run influence ops on foreigners) hoovering your data is just so mind-bogglingly naive.

          This is why I’ve been urging everyone I know to move away from American based services and providers. It’s slow but honest work.

        • michaelt 4 hours ago
          The oppression of people in China like Uyghurs and Hong Kong, the complete lack of free speech, the saber-rattling at neighbours, and the lack of respect for intellectual property are indeed all well documented.

          But for folks on the opposite side of the world, the threats are more like "they're selling us electric cars and solar panels too cheaply" and the hypothetical "these super cheap CCTV cameras could be used for remote spying"

        • casey2 3 hours ago
          Thousands of years with no invasions, hundreds of years with thousands of invasions.

          China is a nation built for peace, while western nations are built for war.

          • niek_pas 2 hours ago
            Hong Kong? Taiwan? Uyghurs? Tiananmen Square? Tibet?
            • varrakesh 2 hours ago
              China hasn't done anything with Taiwan other than saber-rattling. Hong Kong, Xinjiang, etc. are all part of China.

              The US is (mostly) protective of its citizens but (depending on administration) varyingly hostile to outsiders (immigrants, starting wars, etc.).

              China is suppressive towards its own citizens, but has been largely peaceful with other countries and immigrants/visitors. (Granted, China has way fewer immigrants than the US, so this is not comparable).

          • resonancel 40 minutes ago
            I believe China only got this huge because all its neighours couldn't help joining the peaceful middle realm \s
        • t0lo 5 hours ago
          And you're saying Americans aren't banned from criticising their elites?
          • resonancel 36 minutes ago
            Come back when Americans are routinely jailed for rubbing their elites the wrong way (in some countries, criticisms aren't the only way to rub the leaders the wrong way)
          • rhubarbtree 4 hours ago
            Donald trump is a terrible president and looks like Winnie the Pooh. Keir Starmer is useless and a liar.

            Feel free to go post similar on Chinese social media about their leaders.

          • tommica 5 hours ago
            Pretty sure you guys have a strong laws about free-speech, and criticizing elites is part of that. Though there are some groups that do not really want the 1st amendment to be a thing.
            • ben_w 5 hours ago
              > Though there are some groups that do not really want the 1st amendment to be a thing.

              The executive branch?

              • tommica 5 hours ago
                That would be a naïve perspective.
                • mjamesaustin 4 hours ago
                  Foreigners are literally being denied entry into the country due to opposing viewpoints expressed on social media. People have to disable FaceID on their phones prior to going through customs in case an agent decides to investigate whether their political views are in opposition to the current administration.
          • xienze 3 hours ago
            > And you're saying Americans aren't banned from criticising their elites?

            Half the country would be locked up right now if they weren’t allowed to criticize Trump. Have you even paid attention to how much he’s shitted on, on a daily basis?

      • thesmtsolver2 4 hours ago
        As someone with Tibetan friends and as someone from India, Chinese ulterior motives are way more clear.
        • mordae 4 hours ago
          Same as USA. Happy to see some competition.
    • Quothling 5 hours ago
      It's a little sad that tech now comes down to geopolitics, but if you're not in the USA then what is the difference? I'm Danish, would I rather give my data to China or to a country which recently threatened the kingdom I live in with military invasion? Ideally I'd give them to Mistral, but in reality we're probably going to continue building multi-model tools to make sure we share our data with everyone equally.
    • spaceman_2020 5 hours ago
      I don’t care about whatever “ulterior motives” they might have

      My country’s per capita income is $2500 a year. We can’t pay perpetual rent to OAI/Anthropic

    • try-working 7 hours ago
      if you want to understand why labs open source their models: http://try.works/why-chinese-ai-labs-went-open-and-will-rema...
      • wraptile 5 hours ago
        > Internet comments say that open sourcing is a national strategy, a loss maker subsidized by the government. On the contrary, it is a commercial strategy and the best strategy available in this industry.

        This sounds whole lot like potatoh potahto. I think the former argument is very much the correct one: China can undercut everyone and win, even at a loss. Happened with solar panels, steel, evs, sea food - it's a well tested strategy and it works really well despite the many flavors it comes in.

        That being said a job well done for the wrong reasons is still a job well done so we should very much welcome these contributions, and maybe it's good to upset western big tech a bit so it's remains competitive.

        • try-working 5 hours ago
          It is not only that Chinese labs can undercut on price. It is that they must. They must give away their models for free by open sourcing them, and they must even give away free inference services for people to try them. That is the point of the post.
          • FuckButtons 4 hours ago
            There is not ‘must’ here, they did not ‘have’ to undercut every other strategically and technologically important industry the rest of the world has, but they did as a point of national policy.
            • vessenes 2 hours ago
              ‘Have to’ and ‘every other’ are both doing so much work here that I think your worldview on this is likely just incorrect.

              The decisions to mobilize a large rural base toward manufacturing and the central bank goals to keep the yuan cheap as a critical support of this project were absolutely national.

              They were ultimately about bringing (or trying to bring) one of the most populous nations in the world out of extreme poverty; in particular the people of the country out of extreme poverty.

              There are different policies in place today, and, crucially, bleeding edge tech is not gainful labor employment —- BYD has some factories with roughly 2 employees per acre of robotic production, for instance. Or datacenters where the revenue could scale but the labor will not.

              So, these are different times, different goals, different political and labor outcomes. Reasoning about what China “must do”, or has as a matter of “national policy” should start with a clear look at history and circumstance, or you’re likely to read things incorrectly.

            • try-working 1 hour ago
              No. Read what I wrote. I have spent a decade in the Chinese tech industry.
            • Danox 3 hours ago
              American industry has been on a downward spiral since the early 1960s….
              • FuckButtons 3 hours ago
                I’m not claiming it hasn’t been, but if you would look around, it’s not just the USA this has impacted.
    • I_am_tiberius 7 hours ago
      Open weight!
      • alecco 6 hours ago
        Please don't slander the most open AI company in the world. Even more open than some non-profit labs from universities. DeepSeek is famous for publishing everything. They might take a bit to publish source code but it's almost always there. And their papers are extremely pro-social to help the broader open AI community. This is why they struggle getting funded because investors hate openness. And in China they struggle against the political and hiring power of the big tech companies.

        Just this week they published a serious foundational library for LLMs https://github.com/deepseek-ai/TileKernels

        Others worth mentioning:

        https://github.com/deepseek-ai/DeepGEMM a competitive foundational library

        https://github.com/deepseek-ai/Engram

        https://github.com/deepseek-ai/DeepSeek-V3

        https://github.com/deepseek-ai/DeepSeek-R1

        https://github.com/deepseek-ai/DeepSeek-OCR-2

        They have 33 repos and counting: https://github.com/orgs/deepseek-ai/repositories?type=all

        And DeepSeek often has very cool new approaches to AI copied by the rest. Many others copied their tech. And some of those have 10x or 100x the GPU training budget and that's their moat to stay competitive.

        The models from Chinese Big Tech and some of the small ones are open weights only. (and allegedly benchmaxxed) (see https://xcancel.com/N8Programs/status/2044408755790508113). Not the same.

        • patshead 6 hours ago
          DeepSeek's models are indeed open weight. Why do you feel that pointing this out would be considered slander?
          • culi 3 hours ago
            I think they were reading GP's comment as a correction. Like "not open-source, just open weight". I'm not sure if their reading was accurate but I enjoyed their high effort comment nonetheless
        • kortilla 4 hours ago
          It’s not slander to say something true. These are open weights, not open source. They don’t provide the training data or the methodology requires to reproduce these weights.

          So you can’t see what facts are pruned out, what biases were applied, etc. Even more importantly, you can’t make a slightly improved version.

          This model is as open source as a windows XP installation ISO.

          • alecco 4 hours ago
            > These are open weights, not open source.

            Did you even read my comment?

      • 0-_-0 6 hours ago
        Weights are the source, training data is the compiler
        • crazylogger 5 hours ago
          Training data == source code, training algorithm == compiler, model weights == compiled binary.
          • 0-_-0 5 hours ago
            Training algorithm is the programmer, weights are the code that you run in an interpreter
        • ngruhn 5 hours ago
          isn't it more like the data is the source, the training process is the compiler, and the weights are the binary output.
    • zerr 4 hours ago
      Do they also open-source censoring filter rules? Like, you can't ask what happened at Tiananmen Square in 1989.
    • harladsinsteden 4 hours ago
      > I know if the potential ulterior motives.

      And you think the US tech giants don't have any ulterior motives?!

      • FuckButtons 4 hours ago
        I think their motives are pretty transparent, as are china’s, as ever, you have to pick the lesser of two evils.
  • yanis_t 1 hour ago
    Assuming it is almost as good as Opus 4.6 (which benchmarks seem to give evidence for), and assuming we are having a good enough harness (PI, OpenCode), it's is now more than 5x cheaper.

    I just want to remind you that this is happening at the same time as Anthropic A/B tests removal of Code from Pro Plan, and as OpenAI releases gpt-5.5 2x more expensive than gpt-5.4...

    • stingraycharles 1 hour ago
      > Assuming it is almost as good as Opus 4.6 (which benchmarks seem to give evidence for)

      That’s a big if. It’s my experience that models that perform very well on benchmarks do not necessarily perform well in real life.

      I’ve mostly started ignoring the benchmarks and run my own evals.

  • nthypes 8 hours ago
    https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

    Model was released and it's amazing. Frontier level (better than Opus 4.6) at a fraction of the cost.

    • 0xbadcafebee 7 hours ago
      I don't think we need to compare models to Opus anymore. Opus users don't care about other models, as they're convinced Opus will be better forever. And non-Opus users don't want the expense, lock-in or limits.

      As a non-Opus user, I'll continue to use the cheapest fastest models that get my job done, which (for me anyway) is still MiniMax M2.5. I occasionally try a newer, more expensive model, and I get the same results. I have a feeling we might all be getting swindled by the whole AI industry with benchmarks that just make it look like everything's improving.

      • versteegen 6 hours ago
        Which model's best depends on how you use it. There's a huge difference in behaviour between Claude and GPT and other models which makes some poor substitutes for others in certain use cases. I think the GPT models are a bad substitute for Claude ones for tasks such as pair-programming (where you want to see the CoT and have immediate responses) and writing code that you actually want to read and edit yourself, as opposed to just letting GPT run in the background to produce working code that you won't inspect. Yes, GPT 5.4 is cheap and brilliant but very black-box and often very slow IME. GPT-5.4 still seems to behave the same as 5.1, which includes problems like: doesn't show useful thoughts, can think for half an hour, says "Preparing the patch now" then thinks for another 20 min, gives no impression of what it's doing, reads microscopic parts of source files and misses context, will do anything to pass the tests including patching libraries...
      • ind-igo 6 hours ago
        Agree with your assessment, I think after models reached around Opus 4.5 level, its been almost indistinguishable for most tasks. Intelligence has been commoditized, what's important now is the workflows, prompting, and context management. And that is unique to each model.
        • vidarh 5 hours ago
          Same for me. There are tasks when I want the smartest model. But for a whole lot of tasks I now default to Sonnet, or go with cheaper models like GLM, Kimi, Qwen. DeepSeek hasn't been in the mix for a while because their previous model had started lagging, but will definitely test this one again.

          The tricky part is that the "number of tokens to good result" does absolutely vary, and you need a decent harness to make it work without too much manual intervention, so figuring out which model is most cost-effective for which tasks is becoming increasingly hard, but several are cost-effective enough.

        • wuschel 5 hours ago
          This is not true for some cases e.g. there are stark differences in the correctness of answers in certain type of case work.
      • sandos 6 hours ago
        Is Opus nerfed somehow in Copilot? Ive tried it numerous times, it has never reallt woved me. They seem to have awfully small context windows, but still. Its mostly their reasoning which has been off

        Codex is just so much better, or the genera GPT models.

      • spaceman_2020 5 hours ago
        I found Opus 4.7 to be actually worse than Opus 4.6 for my use case

        Substantially worse at following instructions and overoptimized for maximizing token usage

      • kmarc 6 hours ago
        This resonates with me a lot.

        I do some stuff with gemini flash and Aider, but mostly because I want to avoid locking myself into a walled garden of models, UIs and company

      • post-it 6 hours ago
        What do you run these on? I've gotten comfortable with Claude but if folks are getting Opus performance for cheaper I'll switch.
        • oceanplexian 5 hours ago
          You can just use Claude Code with a few env vars, most of these providers offer an Anthropic compatible API
        • slopinthebag 6 hours ago
          Try Charm Crush first, it's a native binary. If it's unbearable, try opencode, just with the knowledge your system will probably be pwned soon since it's JS + NPM + vibe coding + some of the most insufferable devs in the industry behind that product.

          If you're feeling frisky, Zed has a decent agent harness and a very good editor.

      • sandGorgon 6 hours ago
        actually this is not the reason - the harness is significantly better. There is no comparable harness to Claude Code with skills, etc.

        Opencode was getting there, but it seems the founders lost interest. Pi could be it, but its very focused on OpenClaw. Even Codex cli doesnt have all of it.

        which harness works well with Deepseek v4 ?

        • darkwater 5 hours ago
          What's the issue with OC? I tried it a bit over 2 months ago, when I was still on Claude API, and it actually liked more that CC (i.e. the right sidebar with the plan and a tendency at asking less "security" questions that CC). Why is it so bad nowadays?
      • avereveard 6 hours ago
        eh idk. until yesterday opus was the one that got spatial reasoning right (had to do some head pose stuff, neither glm 5.1 nor codex 5.3 could "get" it) and codex 5.3 was my champion at making UX work.

        So while I agree mixed model is the way to go, opus is still my workhorse.

        • gunalx 2 hours ago
          I find gemini pretty good ob spatial reasoning.
          • avereveard 2 hours ago
            Yeah but gemini has a hard time discussing about solutions it just jump to implementation which is great if it gets it right and not so great if it goes down the wrong path.

            Not saying it is better or worse, but the way I perpersonally prefer is to design in chat, to make sure all unknown unknown are addressed

      • szundi 6 hours ago
        I don’t know what people are doing but Minimax produced 16 bugreports which of 15 was false positives (literally a mistake).

        In contrast ChatGPT 5.3 and also Opus has a 90% rate at least on this same project. (Embedded)

        All other tests were the same. What are you doing with these models?

    • onchainintel 8 hours ago
      How does it compare to Opus 4.7? I've been immersed in 4.7 all week participating in the Anthropic Opus 4.7 hackathon and it's pretty impressive even if it's ravenous from a token perspective compared to 4.6
      • greenknight 8 hours ago
        The thing is, it doesnt need to beat 4.7. it just needs to do somewhat well against it.

        This is free... as in you can download it, run it on your systems and finetune it to be the way you want it to be.

        • libraryofbabel 6 hours ago
          > you can download it, run it on your systems

          In theory, sure, but as other have pointed out you need to spend half a million on GPUs just to get enough VRAM to fit a single instance of the model. And you’d better make sure your use case makes full 24/7 use of all that rapidly-depreciating hardware you just spent all your money on, otherwise your actual cost per token will be much higher than you think.

          In practice you will get better value from just buying tokens from a third party whose business is hosting open weight models as efficiently as possible and who make full use of their hardware. Even with the small margin they charge on top you will still come out ahead.

          • oceanplexian 5 hours ago
            There are a lot of companies who would gladly drop half a million on a GPU to have private inference that Anthropic or OpenAI can’t use to steal their data.

            And that GPU wouldn’t run one instance, the models are highly parallelizable. It would likely support 10-15 users at once, if a company oversubscribed 10:1 that GPU supports ~100 seats. Amortized over a couple years the costs are competitive.

            • libraryofbabel 5 hours ago
              > There are a lot of companies who would gladly drop half a million on a GPU to have private inference that Anthropic or OpenAI can’t use to steal their data.

              Obviously, and certainly companies do run their own models because they place some value on data sovereignty for regulatory or compliance or other reasons. (Although the framing that Anthropic or OpenAI might "steal their data" is a bit alarmist - plenty of companies, including some with _highly_ sensitive data, have contracts with Anthropic or OpenAI that say they can't train future models on the data they send them and are perfectly happy to send data to Claude. You may think they're stupid to do that, but that's just your opinion.)

              > the models are highly parallelizable. It would likely support 10-15 users at once.

              Yes, I know that; I understand LLM internals pretty well. One instance of the model in the sense of one set of weights loaded across X number of GPUs; of course you can then run batch inference on those weights, up to the limits of GPU bandwidth and compute.

              But are those 100 users you have on your own GPUs usings the GPUs evenly across the 24 hours of the day, or are they only using them during 9-5 in some timezone? If so, you're leaving your expensive hardware idle for 2/3 of the day and the third party providers hosting open weight models will still beat you on costs, even without getting into other factors like they bought their GPUs cheaper than you did. Do the math if you don't believe me.

          • hsbauauvhabzb 5 hours ago
            Sure, but that’s an incredibly short term viewpoint.
        • p1esk 7 hours ago
          Do you think a lot of people have “systems” to run a 1.6T model?
          • CJefferson 6 hours ago
            To me, the important thing isn't that I can run it, it's that I can pay someone else to run it. I'm finding Opus 4.7 seems to be weirdly broken compared to 4.6, it just doesn't understand my code, breaks it whenever I ask it to do anything.

            Now, at the moment, i can still use 4.6 but eventually Anthropic are going to remove it, and when it's gone it will be gone forever. I'm planning on trying Deepseek v4, because even if it's not quite as good, I know that it will be available forever, I'll always be able to find someone to run it.

            • muyuu 44 minutes ago
              Yep, it's wild how little emphasis is there on control and replicability in these posts.

              Already these models are useful for a myriad of use cases. It's really not that important if a model can 1-shot a particular problem or draw a cuter pelican on a bike. Past a degree of quality, process and reliability are so much more important for anything other than complete hands-off usage, which in business it's not something you're really going to do.

              The fact that my tool may be gone tomorrow, and this actually has happened before, with no guarantees of a proper substitute... that's a lot more of a concern than a point extra in some benchmark.

          • applfanboysbgon 7 hours ago
            No, but businesses do. Being able to run quality LLMs without your business, or business's private information, being held at the mercy of another corp has a lot of value.
            • forrestthewoods 7 hours ago
              What type of system is needed to self host this? How much would it cost?
              • disiplus 6 hours ago
                Depends how many users you have and what is "production grade" for you but like 500k gets you a 8x B200 machine.
              • p1esk 7 hours ago
                Depends on fast you want it to be. I’m guessing a couple of $10k mac studio boxes could run it, but probably not fast enough to enjoy using it.
              • fragmede 6 hours ago
                One GB200 NVL72 from Nvidia would do it. $2-3 million, or so. If you're a corporation, say Walmart or PayPal, that's not out of the question.

                If you want to go budget corporate, 7 x H200 is just barely going to run it, but all in, $300k ought to do it.

                • gloflo 6 hours ago
                  How many users can you serve with that?
                  • fragmede 5 hours ago
                    For the H200, between 150-700. The GB200 gets you something like 2-10k users.
              • CamperBob2 5 hours ago
                $20K worth of RTX 6000 Blackwell cards should let you run the Flash version of the model.
            • choldstare 7 hours ago
              Not really - on prem llm hosting is extremely labor and capital intensive
              • applfanboysbgon 7 hours ago
                But can be, and is, done. I work for a bootstrapped startup that hosts a DeepSeek v3 retrain on our own GPUs. We are highly profitable. We're certainly not the only ones in the space, as I'm personally aware of several other startups hosting their own GLM or DeepSeek models.
                • wuschel 5 hours ago
                  Why a retrain? What are you using the model for?
        • onchainintel 7 hours ago
          Completely agree, not suggesting it needs ot just genuinely curious. Love that it can be run locally though. Open source LLMs punching back pretty hard against proprietary ones in the cloud lately in terms of performance.
        • kelseyfrog 7 hours ago
          What's the hardware cost to running it?
          • redox99 7 hours ago
            Probably like 100 USD/hour
          • bbor 7 hours ago
            I was curious, and some [intrepid soul](https://wavespeed.ai/blog/posts/deepseek-v4-gpu-vram-require...) did an analysis. Assuming you do everything perfectly and take full advantage of the model's MoE sparsity, it would take:

            - To run at full precision: "16–24 H100s", giving us ~$400-600k upfront, or $8-12/h from [us-east-1](https://intuitionlabs.ai/articles/h100-rental-prices-cloud-c...).

            - To run with "heavy quantization" (16 bits -> 8): "8xH100", giving us $200K upfront and $4/h.

            - To run truly "locally"--i.e. in a house instead of a data center--you'd need four 4090s, one of the most powerful consumer GPUs available. Even that would clock in around $15k for the cards alone and ~$0.22/h for the electricity (in the US).

            Truly an insane industry. This is a good reminder of why datacenter capex from since 2023 has eclipsed the Manhattan Project, the Apollo program, and the US interstate system combined...

            • oceanplexian 5 hours ago
              All these number are peanuts to a mid sized company. A place I worked at used to spend a couple million just for a support contract on a Netapp.

              10 years from now that hardware will be on eBay for any geek with a couple thousand dollars and enough power to run it.

            • zargon 7 hours ago
              That article is a total hallucination.

              "671B total / 37B active"

              "Full precision (BF16)"

              And they claim they ran this non-existent model on vLLM and SGLang over a month and a half ago.

              It's clickbait keyword slop filled in with V3 specs. Most of the web is slop like this now. Sigh.

          • slashdave 7 hours ago
            "if you have to ask..."
        • johnmaguire 7 hours ago
          ... if you have 800 GB of VRAM free.
          • inventor7777 7 hours ago
            I remember reading about some new frameworks have been coming out to allow Macs to stream weights of huge models live from fast SSDs and produce quality output, albeit slowly. Apart from that...good luck finding that much available VRAM haha
      • spaceman_2020 5 hours ago
        Tbh I was more productive with 4.6 than ever before and if AI progress locks in permanently at 4.6 tier, I’d be pretty happy
      • rvz 7 hours ago
        It is more than good enough and has effectively caught up with Opus 4.6 and GPT 5.4 according to the benchmarks.

        It's about 2 months behind GPT 5.5 and Opus 4.7.

        As long as it is cheap to run for the hosting providers and it is frontier level, it is a very competitive model and impressive against the others. I give it 2 years maximum for consumer hardware to run models that are 500B - 800B quantized on their machines.

        It should be obvious now why Anthropic really doesn't want you to run local models on your machine.

        • deaux 7 hours ago
          Vibes > Benchmarks. And it's all so task-specific. Gemini 3 has scored very well in benchmarks for very long but is poor at agentic usecases. A lot of people prefering Opus 4.6 to 4.7 for coding despite benchmarks, much more than I've seen before (4.5->4.6, 4->4.5).

          Doesn't mean Deepseek v4 isn't great, just benchmarks alone aren't enough to tell.

        • snovv_crash 7 hours ago
          With the ability of the Qwen3.6 27B, I think in 2 years consumers will be running models of this capability on current hardware.
        • colordrops 7 hours ago
          What's going to change in 2 years that would allow users to run 500B-800B parameter models on consumer hardware?
    • creamyhorror 4 hours ago
      No, the Deepseek V4 paper itself says that DS-V4-Pro-Max is close to Opus 4.5 in their staff evaluations, not better than 4.6:

      > In our internal evaluation, DeepSeek-V4-Pro-Max outperforms Claude Sonnet 4.5 and approaches the level of Opus 4.5.

    • doctoboggan 8 hours ago
      Is it honestly better than Opus 4.6 or just benchmaxxed? Have you done any coding with an agent harness using it?

      If its coding abilities are better than Claude Code with Opus 4.6 then I will definitely be switching to this model.

      • bokkies 6 hours ago
        Apparently glm5.1 and qwen coder latest is as good as opus 4.6 on benchmarks. So I tried both seriously for a week (glm Pro using CC) and qwen using qwen companion. Thought I could save $80 a month. Unfortunately after 2 days I had switched back to Max. The speed (slower on both although qwen is much faster) and errors (stupid layout mistakes, inserting 2 footers then refusing to remove one, not seeing obvious problems in screenshots & major f-ups of functionality), not being able to view URLs properly, etc. I'll give deepseek a go but I suspect it will be similar. The model is only half the story. Also been testing gpt5.4 with codex and it is very almost as good as CC... better on long running tasks running in background. Not keen on ChatGPT codex 'personality' so will stick to CC for the most part.
      • madagang 7 hours ago
        Their Chinese announcement says that, based on internal employee testing, it is not as good as Opus 4.6 Thinking, but is slightly better than Opus 4.6 without Thinking enabled.
        • mchusma 7 hours ago
          I appreciate this, makes me trust it more than benchmarks.
        • anentropic 1 hour ago
          Who uses Opus without thinking though...?
        • ibic 6 hours ago
          In case people wonder where the announcement is (you can easily translate it via browser if you don't read Chinese): https://mp.weixin.qq.com/s/8bxXqS2R8Fx5-1TLDBiEDg

          It's still a "preview" version atm.

        • deaux 7 hours ago
          That's super interesting, isn't Deepseek in China banned from using Anthropic models? Yet here they're comparing it in terms of internal employee testing.
          • computably 3 hours ago
            > That's super interesting, isn't Deepseek in China banned from using Anthropic models? Yet here they're comparing it in terms of internal employee testing.

            I don't see why Deepseek would care to respect Anthropic's ToS, even if just to pretend. It's not like Anthropic could file and win a lawsuit in China, nor would the US likely ban Deepseek. And even if the US gov would've considered it, Anthropic is on their shitlist.

          • renticulous 6 hours ago
            They use VPN to access. Even Google Deepmind uses Anthropic. There was a fight within Google as to why only DeepMind is allowed to Claude while rest of the Google can't.
    • NitpickLawyer 7 hours ago
      > (better than Opus 4.6)

      There we go again :) It seems we have a release each day claiming that. What's weird is that even deepseek doesn't claim it's better than opus w/ thinking. No idea why you'd say that but anyway.

      Dsv3 was a good model. Not benchmaxxed at all, it was pretty stable where it was. Did well on tasks that were ood for benchmarks, even if it was behind SotA.

      This seems to be similar. Behind SotA, but not by much, and at a much lower price. The big one is being served (by ds themselves now, more providers will come and we'll see the median price) at 1.74$ in / 3.48$ out / 0.14$ cache. Really cheap for what it offers.

      The small one is at 0.14$ in / 0.28$ out / 0.028$ cache, which is pretty much "too cheap to matter". This will be what people can run realistically "at home", and should be a contender for things like haiku/gemini-flash, if it can deliver at those levels.

      • slopinthebag 6 hours ago
        Anthropic fans would claim God itself is behind Opus by 3-6 months and then willingly be abused by Boris and one of his gaslighting tweets.

        LMAO

        • NitpickLawyer 6 hours ago
          > Anthropic fans ...

          I have no idea why you'd think that, but this is straight from their announcement here (https://mp.weixin.qq.com/s/8bxXqS2R8Fx5-1TLDBiEDg):

          > According to evaluation feedback, its user experience is better than Sonnet 4.5, and its delivery quality is close to Opus 4.6's non-thinking mode, but there is still a certain gap compared to Opus 4.6's thinking mode.

          This is the model creators saying it, not me.

    • bbor 7 hours ago
      For the curious, I did some napkin math on their posted benchmarks and it racks up 20.1 percentage point difference across the 20 metrics where both were scored, for an average improvement of about 2% (non-pp). I really can't decide if that's mind blowing or boring?

      Claude4.6 was almost 10pp better at at answering questions from long contexts ("corpuses" in CorpusQA and "multiround conversations" in MRCR), while DSv4 was a staggering 14pp better at one math challenge (IMOAnswerBench) and 12pp better at basic Q&A (SimpleQA-Verified).

      • Quasimarion 7 hours ago
        FWIW it's also like 10x cheaper.
    • sergiotapia 8 hours ago
      The dragon awakes yet again!
      • kindkang2024 7 hours ago
        There appears a flight of dragons without heads. Good fortune.

        That's literally what the I Ching calls "good fortune."

        Competition, when no single dragon monopolizes the sky, brings fortune for all.

    • rapind 8 hours ago
      Pop?
  • zargon 7 hours ago
    The Flash version is 284B A13B in mixed FP8 / FP4 and the full native precision weights total approximately 154 GB. KV cache is said to take 10% as much space as V3. This looks very accessible for people running "large" local models. It's a nice follow up to the Gemma 4 and Qwen3.5 small local models.
    • regularfry 55 minutes ago
      I'm going to blow my bandwidth allowance again this month, aren't I.
    • sbinnee 7 hours ago
      Price is appealing to me. I have been using gemini 3 flash mainly for chat. I may give it a try.

      input: $0.14/$0.28 (whereas gemini $0.5/$3)

      Does anyone know why output prices have such a big gap?

      • girvo 6 hours ago
        Output is what the compute is used for above all else; costs more hardware time basically than prompt processing (input) which is a lot faster
      • tokenmaxxinej 5 hours ago
        input tokens are processed at 10-50 times the speed of output tokens since you can process then in batches and not one at a time like output tokens
  • vinhnx 2 hours ago
    The king is back! I remember vividly being very amazed and having a deep appreciation reading DeepSeek's reasoning on Chat.DeepSeek.com, even before the DeepSeek moment in January later that year. I can't quite remember the date, but it's the most profound moment I have ever had. After OpenAI O1, no other model has “reasoning” capability yet. And DeepSeek opens the full trace for us. Seeing DeepSeek's “wait, aha…” moments is something hard to describe. I learned strategy and reasoning skills for myself also. I am always rooting for them.
    • buenolot 2 hours ago
      Instead of King DeepSeek we got DeepShit Clown
  • zkmon 6 hours ago
    They released 1.6 T pro base model on huggingface. First time I'm seeing a "T" model here.
    • mzl 4 hours ago
      Kimi K2.5 and K2.6 are both >1T
  • quadruple 3 hours ago
    In their paper, point 5.2.5 talks about their sandboxing platform(DeepSeek Elastic Compute). It seems like they have 4 different execution methods: function calls, container, microVM and fullVM.

    This is a pretty interesting thing they've built in my opinion, and not something I'd expect to be buried in the model paper like this. Does anyone have any details about it? Google doesn't seem to find anything of note, and I'd love to dive a bit deeper into DSec.

  • ghstinda 8 minutes ago
    so many models not enough time
  • sixhobbits 5 hours ago
    I know people don't like Twitter links here but the main link just goes to their main docs site generic 'getting started' page.

    The website now has a link to the announcement on Twitter here https://x.com/deepseek_ai/status/2047516922263285776

    Copying text of that below

    DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.

    DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models.

    DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.

    Try it now at http://chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!

    Tech Report: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

    Open Weights: https://huggingface.co/collections/deepseek-ai/deepseek-v4

  • Imanari 6 hours ago
    Just tested it via openrounter in the Pi Coding agent and it regularly fails to use the read and write tool correctly, very disappointing. Anyone know a fix besides prompting "always use the provided tools instead of writing your own call"
    • rane 4 hours ago
    • whalesalad 31 minutes ago
      my experience with pi is that it’s terrible at doing anything
    • abstracthinking 5 hours ago
      They have just released it, give it some time, they probably haven't pretested it with Pi
      • Imanari 5 hours ago
        How can they fix it after the release? They would have to retrain/finetune it further, no?
        • zargon 5 hours ago
          It's only in preview right now. And anyway, yes, models regularly get updated training.

          But in this case, it's more likely just to be a tooling issue.

    • mark33vh 2 hours ago
      Yeah hope they fix this for PI
  • coderssh 5 hours ago
    Feels like the real story here is cost/performance tradeoff rather than raw capability. Benchmarks keep moving incrementally, but efficiency gains like this actually change who can afford to build on top.
  • simonw 7 hours ago
    I like the pelican I got out of deepseek-v4-flash more than the one I got from deepseek-v4-pro.

    https://simonwillison.net/2026/Apr/24/deepseek-v4/

    Both generated using OpenRouter.

    For comparison, here's what I got from DeepSeek 3.2 back in December: https://simonwillison.net/2025/Dec/1/deepseek-v32/

    And DeepSeek 3.1 in August: https://simonwillison.net/2025/Aug/22/deepseek-31/

    And DeepSeek v3-0324 in March last year: https://simonwillison.net/2025/Mar/24/deepseek/

    • JSR_FDED 7 hours ago
      No way. The Pro pelican is fatter, has a customized front fork, and the sun is shining! He’s definitely living the best life.
      • chronogram 6 hours ago
        The pro pelican is a work of art! It goes dimensions that no other LLM has gone before.
      • w4yai 7 hours ago
        yeah. look at these 4 feathers (?) on his bum too.
      • oliver236 7 hours ago
        a lot of dumplings
    • torginus 5 hours ago
      This is just a random thought, but have you tried doing an 'agentic' pelican?

      As in have the model consider its generated SVG, and gradually refine it, using its knowledge of the relative positions and proportions of the shapes generated, and have it spin for a while, and hopefully the end result will be better than just oneshotting it.

      Or maybe going even one step further - most modern models have tool use and image recognition capabilities - what if you have it generate an SVG (or parts/layers of it, as per the model's discretion) and feed it back to itself via image recognition, and then improve on the result.

      I think it'd be interesting to see, as for a lot of models, their oneshot capability in coding is not necessarily corellated with their in-harness ability, the latter which really matters.

      • simonw 5 hours ago
        I tried that for the GPT-5 launch - a self-improving loop that renders the SVG, looks at it and tries again - and the results were surprisingly disappointing.

        I should try it again with the more recent models.

        • torginus 3 hours ago
          I see, thanks. I guess most current models are not yet trained for this loop.

          Could you please try with Opus 4.7? I think there's a chance of it doing well, considering the design/vision focus.

    • nickvec 7 hours ago
      The Flash one is pretty impressive. Might be my favorite so far in the pelican-riding-a-bicycle series
    • murkt 6 hours ago
      DeepSeek pelicans are the angriest pelicans I’ve seen so far.
      • muyuu 40 minutes ago
        They're stressed pelicans from Hangzhou.
      • kristopolous 6 hours ago
        they're just late for work.
      • lazycatjumping 5 hours ago
        996 Pelican, lol
    • mikae1 6 hours ago
      Being a bicycle geometry nerd I always look at the bicycle first.

      Let me tell you how much the Pro one sucks... It looks like failed Pedersen[1]. The rear wheel intersects with the bottom bracket, so it wouldn't even roll. Or rather, this bike couldn't exist.

      The flash one looks surprisingly correct with some wild fork offset and the slackest of seat tubes. It's got some lowrider[2] aspirations with the small wheels, but with longer, Rivendellish[3], chainstays. The seat post has different angle than the seat tube, so good luck lowering that.

      [1] https://en.wikipedia.org/wiki/Pedersen_bicycle

      [2] https://en.wikipedia.org/wiki/Lowrider_bicycle

      [3] https://www.rivbike.com/

      • simonw 6 hours ago
        This is an excellent comment. Thanks for this - I've only ever thought about whether the frame is the right shape, I never thought about how different illustrations might map to different bicycle categories.
        • mikae1 6 hours ago
          Some other reactions:

          I wonder which model will try some more common spoke lacing patterns. Right now there seems to be a preference for radial lacing, which is not super common (but simple to draw). The Flash and Pro one uses 16 spoke rims, which actually exist[1] but are not super common.

          The Pro model fails badly at the spokes. Heck, the spokes sit on the outside of the drive side of the rim and tire. Have a nice ride riding on the spokes (instead of the tire) welded to the side of your rim.

          Both bikes have the drive side on the left, which is very very uncommon. That can't exist in the training data.

          [1] https://cicli-berlinetta.com/product/campagnolo-shamal-16-sp...

      • jojobas 6 hours ago
        The Pedersen looks like someone failed the "draw a bicycle" test and decided to adjust the universe.
    • catelm 6 hours ago
      I think the pelican on a bike is known widely enough that of seizes to be useful as a benchmark. There is even a pelican briefly appearing in the promo video of GPT-5, if I'm not mistaken https://openai.com/gpt-5/. So the companies are apparently aware of it.
    • brutal_chaos_ 6 hours ago
      What was your prompt for the image? Apologies if this should be obvious.
      • shawn_w 6 hours ago
        >Generate an SVG of a pelican riding a bicycle

        at the top of the linked pages.

    • nsoonhui 6 hours ago
      To me this is the perfect proof that

      1) LLM is not AGI. Because surely if AGI it would imply that pro would do better than flash?

      2) and because of the above, Pelican example is most likely already being benchmaxxed.

    • chvid 6 hours ago
      Is it then Deepseek hosted by Deepseek?

      How much does the drawing change if you ask it again?

    • ycui1986 7 hours ago
      I really like the pro version. The pelican is so cute.
    • theanonymousone 6 hours ago
      Where is the GPT 5.5 Pelican?
    • lobochrome 6 hours ago
      Why they so angry?
    • whateveracct 6 hours ago
      [flagged]
      • fastball 6 hours ago
        It's just Simon Willison (the person you are replying to) who always makes a pelican, as his personal flippant benchmark. It's not that deep.
      • dewey 6 hours ago
        No benchmark will be perfect, especially if it's public but it's a fun experiment to visually see how these models get better and better.
      • post-it 6 hours ago
        Why is it so wrong?
      • simonw 6 hours ago
        Thanks for the "scientific air" remark, that gave me a genuine LOL.
        • a96 4 hours ago
          "The difference between screwing around and science is writing it down" -- Adam Savage
    • EnPissant 6 hours ago
      This should not be the top comment on every model release post. It's getting tiring.
      • blitzar 6 hours ago
        This should be the bottom comment on the pelican comment on every model release post.
        • EnPissant 5 hours ago
          Clearly the top comment should be "Imagine a beowulf cluster of Deepseek v4!"
  • jessepcc 8 hours ago
    At this point 'frontier model release' is a monthly cadence, Kimi 2.6 Claude 4.6 GPT 5.5, the interesting question is which evals will still be meaningful in 6 months.
    • mixtureoftakes 4 hours ago
      more like weekly or almost daily, gpt 5.5 was literally 12 hours ago
  • Aliabid94 8 hours ago
    MMLU-Pro:

    Gemini-3.1-Pro at 91.0

    Opus-4.6 at 89.1

    GPT-5.4, Kimi2.6, and DS-V4-Pro tied at 87.5

    Pretty impressive

    • ant6n 7 hours ago
      Funny how Gemini is theoretically the best -- but in practice all the bugs in the interface mean I don't want to use it anymore. The worst is it forgets context (and lies about it), but it's very unreliable at reading pdfs (and lies about it). There's also no branch, so once the context is lost/polluted, you have to start projects over and build up the context from scratch again.
      • spaceman_2020 5 hours ago
        The sheer number of bugs and lack of meaningful improvements in Google products is a clear counterargument to the AI bull thesis

        If AI was so good at coding, why can’t it actually make a usable Gemini/AI Studio app?

        • barnabee 4 hours ago
          I think Google might just be institutionally incapable of making good UX
      • hodgehog11 3 hours ago
        Most of these tests are one-prompt in nature. I've also noticed issues with the PDF reader in Gemini which was very frustrating, although it is significantly better now than it was even two weeks ago. On the contrary, now GPT-5 seems to be giving me issues.

        In my experience, Gemini is the most insightful model for hard problems (particularly math problems that I work on).

      • lazycatjumping 5 hours ago
        I gave up on Gemini 3.1 Pro in VSCode after 2 hours. They fully refunded me.
      • esperent 5 hours ago
        Yeah if I could use Gemini with pi.dev that would be my choice. But Gemini CLI is just so, so bad.
  • rohanm93 6 hours ago
    This is shockingly cheap for a near frontier model. This is insane.

    For context, for an agent we're working on, we're using 5-mini, which is $2/1m tokens. This is $0.30/1m tokens. And it's Opus 4.6 level - this can't be real.

    I am uncomfortable about sending user data which may contain PII to their servers in China so I won't be using this as appealing as it sounds. I need this to come to a US-hosted environment at an equivalent price.

    Hosting this on my own + renting GPUs is much more expensive than DeepSeek's quoted price, so not an option.

    • esperent 5 hours ago
      > I am uncomfortable about sending user data which may contain PII to their servers in China

      As a European I feel deeply uncomfortable about sending data to US companies where I know for sure that the government has access to it.

      I also feel uncomfortable sending it to China.

      If you'd asked me ten years ago which one made me more uncomfortable. China.

      But now I'm not so sure, in fact I'm starting to lean towards the US as being the major risk.

    • fractalf 6 hours ago
      Right now Im much more worried about sending data to the US and A.. At least theres a less chanse it will be missused against -me-
    • swiftcoder 4 hours ago
      > For context, for an agent we're working on, we're using 5-mini, which is $2/1m tokens. This is $0.30/1m tokens. And it's Opus 4.6 level - this can't be real.

      It's doesn't seem all that out there compared to the other Chinese model price/performance? Kimi2.6 is cheaper even than this, and is pretty close in performance

      • rohanm93 3 hours ago
        Kimi is indeed somewhat cheap for frontier-level intelligence, but still is $4-5 per mm tokens. Deep Seek is at least an order of magnitude cheaper.
        • swiftcoder 2 hours ago
          Oh, right you are. I misread where the decimal place was in the Deepseek pricing. That is incredibly cheap
  • gardnr 6 hours ago
    865 GB: I am going to need a bigger GPU.
  • lifeisstillgood 4 hours ago
    On a seperate note, I am guessing that all the new models have announced in the space of a few days because the time to train a model is the same for each AI company.

    Which strikes me as odd - Inwoukd have assumed someone had an edge in terms of at least 10% extra GPUs.

    • namenotrequired 3 hours ago
      But why would they all start at the same time?
      • lifeisstillgood 3 hours ago
        Because they all (if my memory serves) did this release at the same time thing last time. I have not looked into it but I am guessing that not letting one model pull ahead for a month means everyone keeps up - which implies the “stickiness” of any one model is a lot lower than we think
  • CJefferson 6 hours ago
    What's the current best framework to have a 'claude code' like experience with Deepseek (or in general, an open-source model), if I wanted to play?
    • deaux 6 hours ago
    • TranquilMarmot 6 hours ago
    • whoopdeepoo 6 hours ago
      You can use deepseek with Claude code
      • esperent 5 hours ago
        You can, but does it work well? I assume CC has all kinds of Claude specific prompts in it, wouldn't you be better with a harness designed to be model agnostic like pi.dev or OpenCode?
        • rane 4 hours ago
          I've been using all Kimi K2.6, gpt-5.4 and now Deepseek v4 (thought not extensively yet) in Claude Code and I can say it works much better than you'd expect. It looks like the system prompt and tools are pulling a lot of weight. Maybe the current models are good enough that you don't need them to be trained for a specific harness.
    • Alifatisk 5 hours ago
      You can use CC with other models, you aren’t forced to use Claude model.
    • 0x142857 6 hours ago
      claude-code-cli/opencode/codex
  • yanhangyhy 2 hours ago
    somehow i canot open the link. but in their chinese version's release article, in the end ,there is a quote from xunzi(https://en.wikipedia.org/wiki/Xunzi_(philosopher))

    "Not seduced by praise, not terrified by slander; following the Way in one's conduct, and rectifying oneself with dignity." (不诱于誉,不恐于诽,率道而行,端然正己)

    (It is mainly used to express the way a Confucian gentleman conducts himself in the world. It reminds me of an interview I once watched with an American politician, who said that, at its core, China is still governed through a Confucian meritocratic elite system. It seems some things have never really changed.

    In some respects, Liang Wenfeng can be compared to Linux. The political parallel here is that the advantages of rational authoritarianism are often overlooked because of the constraints imposed by modern democratic systems. )

    • muyuu 36 minutes ago
      Sounds a lot like taoism, but i guess there's overlap
  • storus 6 hours ago
    Oh well, I should have bought 2x 512GB RAM MacStudios, not just one :(
    • muyuu 33 minutes ago
      Unironically curious about the performance of this model on unified VRAM machines.
  • Oxlamarr 2 hours ago
    The speed of progress here is wild. It feels like the hard part is shifting from having access to a strong model to actually building trustworthy systems around it.
  • thefounder 2 hours ago
    They still don’t support json schema or batch api. It’s like deepseek does not want to make money
    • kiproping 2 hours ago
      What do you currently use for json and batch, I was doing some analysis and my results show that gpt-oss-120b (non batch via openrotuer) is the best for now for my use case, better than gemini-flash models (batch on google). How is your experience?
  • luyu_wu 8 hours ago
    For those who didn't check the page yet, it just links to the API docs being updated with the upcoming models, not the actual model release.
  • Grp1 2 hours ago
    DeepSeek’s docs say V4 has a 1M context length. Is that actually usable in practice, or just the model/API limit?

    Codex shows ~258k for me and Claude Code often shows ~200k, so I’m curious how DeepSeek is exposing such a large window.

    • lucrbvi 2 hours ago
      They have added a lot of optimization focussing on the KV-cache, so they can have a much larger window without eating all the VRAM.

      The 1M window might be usable, but it will probably underperform against a smaller window of course.

  • bandrami 6 hours ago
    I don't mind that High Flyer completely ripped off Anthropic to do this so much as I mind that they very obviously waited long enough for the GAB to add several dozen xz-level easter eggs to it.
    • cedws 1 hour ago
      He who is a ripper off-er cannot be ripped off.
  • nba456_ 1 hour ago
    Wow, never seen a post with so many comments posted overnight like this.
  • jdeng 8 hours ago
    Excited that the long awaited v4 is finally out. But feel sad that it's not multimodal native.
  • aquir 5 hours ago
    It is great! I asked the question what I always ask of new models ("what would Ian M Banks think about the current state of AI") and it gave me a brilliant answer! Funny enough the answer contained multiple criticisms of his own creators ("Chinese state entities", "Social Credit System").
  • yanis_t 5 hours ago
    Is there a harness that is as good as cloud code that can be used with open weight models?
    • laurentiurad 2 hours ago
      Try Opencode or Comrade. Both OSS and working great with OSS models too.
    • barnabee 4 hours ago
      I prefer OpenCode over Claude Code, and it works with basically everything. Give it a try. ymmv
    • Numerlor 4 hours ago
      I've liked Hermes agent, but never used Claude code so don't know how it compares
    • sixhobbits 5 hours ago
      Try pi coding agent!
    • npodbielski 4 hours ago
      Never used Claude myself but there are agents that can use local model. I.e. - Jetbrains Junie - Mistral Vibe
  • dannyw 3 hours ago
    Are there better providers for inferencing this right now? I know it's launch day, but openrouter showing 30tps isn't looking great.
  • xnx 6 hours ago
    Such different time now than early 2025 when people thought Deepaeek was going to kill the market for Nvidia.
    • antirez 3 hours ago
      Actually the fact the inference of a SOTA model is completely Nvidia-free is the biggest attack to Nvidia every carried so far. Even American frontier AI labs may start to buy Chinese hardware if they need to continue the AI race, they can't keep paying so much money for the GPUs, especially once Huawei training versions of their GPUs will ship.
      • eunos 1 hour ago
        That's like saying Raytheon would outsource building drones from Saheed makers (don't know who exactly).

        Not gonna happen

    • Ifkaluva 5 hours ago
      They might still kill the market for NVIDIA, if future releases prioritize Huawei chips
  • taosx 8 hours ago
  • clark1013 6 hours ago
    Looking forward to DeepSeek Coding Plan
  • fbrncci 1 hour ago
    Take that Anthropic and your shenanigans.
  • jfxia 5 hours ago
    Is V4 still not a multi-modal model?
    • vitorgrs 5 hours ago
      Not yet... Which is a shame.
  • namegulf 7 hours ago
    Is there a Quantized version of this?
    • mordae 4 hours ago
      They have released mixed fp8/fp4 for efficiency. It's still hundreds of gigabytes, though. Give up on local for these.
  • JonChesterfield 3 hours ago
    Anyone worked out how much hardware one needs to self host this one?
  • GuardCalf 3 hours ago
    I like this. The more competitors there are, the more we the users benefit.
  • sibellavia 6 hours ago
    A few hours after GPT5.5 is wild. Can’t wait to try it.
  • KaoruAoiShiho 8 hours ago
    SOTA MRCR (or would've been a few hours earlier... beaten by 5.5), I've long thought of this as the most important non-agentic benchmark, so this is especially impressive. Beats Opus 4.7 here
  • apexalpha 5 hours ago
    This FLash model might be affordable for OpenClaw. I run it on my mac 48gb ram now but it's slowish.
  • reenorap 7 hours ago
    Which version fits in a Mac Studio M3 Ultra 512 GB?
  • swrrt 8 hours ago
    Any visualised benchmark/scoreboard for comparison between latest models? DeepSeek v4 and GPT-5.5 seems to be ground breaking.
  • aliljet 7 hours ago
    How can you reasonably try to get near frontier (even at all tps) on hardware you own? Maybe under 5k in cost?
    • revolvingthrow 6 hours ago
      For flash? 4 bit quant, 2x 96GB gpu (fast and expensive) or 1x 96GB gpu + 128GB ram (still expensive but probably usable, if you’re patient).

      A mac with 256 GB memory would run it but be very slow, and so would be a 256GB ram + cheapo GPU desktop, unless you leave it running overnight.

      The big model? Forget it, not this decade. You can theoretically load from SSD but waiting for the reply will be a religious experience.

      Realistically the biggest models you can run on local-as-in-worth-buying-as-a-person hardware are between 120B and 200B, depending on how far you’re willing to go on quantization. Even this is fairly expensive, and that’s before RAM went to the moon.

      • zargon 6 hours ago
        Flash is less than 160 GB. No need to quantize to fit in 2x 96 GB. Not sure how much context fits in 30 GB, but it should be a good amount.
        • redrove 6 hours ago
          It seems to be 160GB at mixed FP4+FP8 precision, FYI. Full FP8 is 250GB+. (B)F16 at around double I would assume.
          • zargon 6 hours ago
            There is no BF16. There is no FP8 for the instruct model. The instruct model at full precision is 160 GB (mixed FP4 and FP8). The base model at full precision is 284 GB (FP8). Almost everyone is going to use instruct. But I do love to see base models released.
    • mordae 3 hours ago
      Look at GB/s.

      Strix halo has 256 GB/s bandwidth for $2500. The Flash model has 13 GB activations.

      256 / 13 = 19.6 tokens per second

      Except you cannot fit it into the maximum RAM of 128 GB Strix Halo supports. So move on.

      Another option is Threadripper. That's 8 memory channels. Using older DDR4-3200 you get roughly 200 GB/s. For $2000.

      200 / 13 = 15.4 tokens per second

      But, a chunk of per-token weights is actually always the same and not MoE, so you would offload that to a GPU and get a decent speedup. Say 25 tokens per second total.

      Then likely some expensive Mac. No idea.

      Eventually you arrive at a mining rig chassis with a beefy board and multiple GPUs. That has the benefit of pipelining. You run part of the model on one GPU and move on, so another batch can start on the first one. Low (say 30-100) tps individually, but a lot more in parallel. Best get it with other people.

    • awakeasleep 7 hours ago
      The same way you fit a bucket wheel excavator in your garage
      • floam 6 hours ago
        Very carefully
    • zozbot234 6 hours ago
      Run on an old HEDT platform with a lot of parallel attached storage (probably PCIe 4) and fetch weights from SSD. You'd ultimately be limited by the latency of these per-layer fetches, since MoE weights are small. You could reduce the latencies further by buying cheap Optane memory on the second-hand market.
    • datadrivenangel 6 hours ago
      A loaded macbook pro can get you to the frontier from 24 months ago at ~10-40tok/s, which is plenty fast enough for regular chatting.
    • 542458 6 hours ago
      The low end could be something like an eBay-sourced server with a truckload of DDR3 ram doing all-cpu inference - secondhand server models with a terabyte of ram can be had for about 1.5K. The TPS will be absolute garbage and it will sound like a jet engine, but it will nominally run.

      The flash version here is 284B A13B, so it might perform OK with a fairly small amount of VRAM for the active params and all regular ram for the other params, but I’d have to see benchmarks. If it turns out that works alright, an eBay server plus a 3090 might be the bang-for-buck champ for about $2.5K (assuming you’re starting from zero).

    • jdoe1337halo 7 hours ago
      More like 500k
  • cztomsik 5 hours ago
    So is this the first AI lab using MUON for their frontier model?
  • WhereIsTheTruth 5 hours ago
    Interesting note:

    "Due to constraints in high-end compute capacity, the current service capacity for Pro is very limited. After the 950 supernodes are launched at scale in the second half of this year, the price of Pro is expected to be reduced significantly."

    So it's going to be even cheaper

  • mariopt 7 hours ago
    Does deepseek has any coding plan?
  • raincole 7 hours ago
    History doesn't always repeat itself.

    But if it does, then in the following week we'll see DeepSeek4 floods every AI-related online space. Thousands of posts swearing how it's better than the latest models OpenAI/Anthropic/Google have but only costs pennies.

    Then a few weeks later it'll be forgotten by most.

    • sbysb 7 hours ago
      It's difficult because even if the underlying model is very good, not having a pre-built harness like Claude Code makes it very un-sticky for most devs. Even at equal quality, the friction (or at least perceived friction) is higher than the mainstream models.
      • raincole 7 hours ago
        OpenCode? Pi?

        If one finds it difficult to set up OpenCode to use whatever providers they want, I won't call them 'dev'.

        The only real friction (if the model is actually as good as SOTA) is to convince your employer to pay for it. But again if it really provides the same value at a fraction of the cost, it'll eventually cease to be an issue.

        • throwa356262 6 hours ago

              "If one finds it difficult to set up OpenCode to use whatever providers they want, I won't call them 'dev'."
          
          
          I feel the same way. But look at the ollama vs llama.cpp post from HN few days back and you will see most of the enthusiasts in this space are very non technical people.
          • zargon 5 hours ago
            I think you mean ollama vs llama.cpp.
      • 2ndorderthought 58 minutes ago
        You can literally run it from Claude code. Easily too
      • cmrdporcupine 7 hours ago
        They have instructions right on their page on how to use claude code with it.
    • slopinthebag 6 hours ago
      [flagged]
  • cl08 4 hours ago
    Any way to connect this to claude code?
  • tcbrah 6 hours ago
    giving meta a run for its money, esp when it was supposed to be the poster child for OSS models. deepseek is really overshadowing them rn
    • alpineman 4 hours ago
      Meta is totally directionless
  • rvz 8 hours ago
    The paper is here: [0]

    Was expecting that the release would be this month [1], since everyone forgot about it and not reading the papers they were releasing and 7 days later here we have it.

    One of the key points of this model to look at is the optimization that DeepSeek made with the residual design of the neural network architecture of the LLM, which is manifold-constrained hyper-connections (mHC) which is from this paper [2], which makes this possible to efficiently train it, especially with its hybrid attention mechanism designed for this.

    There was not that much discussion around it some months ago here [3] about it but again this is a recommended read of the paper.

    I wouldn't trust the benchmarks directly, but would wait for others to try it for themselves to see if it matches the performance of frontier models.

    Either way, this is why Anthropic wants to ban open weight models and I cannot wait for the quantized versions to release momentarily.

    [0] https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

    [1] https://news.ycombinator.com/item?id=47793880

    [2] https://arxiv.org/abs/2512.24880

    [3] https://news.ycombinator.com/item?id=46452172

    • jeswin 7 hours ago
      > this is why Anthropic wants to ban open weight models

      Do you have a source?

      • louiereederson 6 hours ago
        More like he wants to ban accelerator chip sales to China, which may be about “national security” or self preservation against a different model for AI development which also happens to be an existential threat to Anthropic. Maybe those alternatives are actually one and the same to him.
  • sergiotapia 6 hours ago
    Using it with opencode sometimes it generates commands like:

        bash({"command":"gh pr create --title "Improve Calendar module docs and clean up idiomatic Elixir" --body "$(cat <<'EOF'
        Problem
        The Calendar modu...
    
    like generating output, but not actually running the bash command so not creating the PR ultimately. I wonder if it's a model thing, or an opencode thing.
  • cubefox 3 hours ago
    Abstract of the technical report [1]:

    > We present a preview version of DeepSeek-V4 series, including two strong Mixture-of-Experts (MoE) language models — DeepSeek-V4-Pro with 1.6T parameters (49B activated) and DeepSeek-V4-Flash with 284B parameters (13B activated) — both supporting a context length of one million tokens. DeepSeek-V4 series incorporate several key upgrades in architecture and optimization: (1) a hybrid attention architecture that combines Compressed Sparse Attention (CSA) and Heavily Compressed Attention (HCA) to improve long-context efficiency; (2) Manifold-Constrained Hyper-Connections (mHC) that enhance conventional residual connections; (3) and the Muon optimizer for faster convergence and greater training stability. We pre-train both models on more than 32T diverse and high-quality tokens, followed by a comprehensive post-training pipeline that unlocks and further enhances their capabilities. DeepSeek-V4-Pro-Max, the maximum reasoning effort mode of DeepSeek-V4-Pro, redefines the state-of-the-art for open models, outperforming its predecessors in core tasks. Meanwhile, DeepSeek-V4 series are highly efficient in long-context scenarios. In the one-million-token context setting, DeepSeek-V4-Pro requires only 27% of single-token inference FLOPs and 10% of KV cache compared with DeepSeek-V3.2. This enables us to routinely support one-million-token contexts, thereby making long-horizon tasks and further test-time scaling more feasible. The model checkpoints are available at https://huggingface.co/collections/deepseek-ai/deepseek-v4.

    1: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

  • zurfer 4 hours ago
    lots of great stuff, but the plot in the paper is just chart crime. different shades of gray for references where sometimes you see 4 models and sometimes 3.
  • tariky 6 hours ago
    Anyone tried with make web UI with it? How good is it? For me opus is only worth because of it.
  • casey2 3 hours ago
    Already over a billion tokens on open router in under 5 hours
  • augment_me 5 hours ago
    Amaze amaze amaze
  • ls612 7 hours ago
    How long does it usually take for folks to make smaller distills of these models? I really want to see how this will do when brought down to a size that will run on a Macbook.
    • simonw 7 hours ago
      Unsloth often turn them around within a few hours, they might have gone to bed already though!

      Keep an eye on https://huggingface.co/unsloth/models

      Update ten minutes later: https://huggingface.co/unsloth/DeepSeek-V4-Pro just appeared but doesn't have files in yet, so they are clearly awake and pushing updates.

    • inventor7777 7 hours ago
      Weren't there some frameworks recently released to allow Macs to stream weights from fast SSDs and thus fit way more parameters than what would normally fit in RAM?

      I have never tried one yet but I am considering trying that for a medium sized model.

      • simonw 7 hours ago
        I've been calling that the "streaming experts" trick, the key idea is to take advantage of Mixture of Expert models where only a subset of the weights are used for each round of calculations, then load those weights from SSD into RAM for each round.

        As I understand it if DeepSeek v4 Pro is a 1.6T, 49B active that means you'd need just 49B in memory, so ~100GB at 16 bit or ~50GB at 8bit quantized.

        v4 Flash is 284B, 13B active so might even fit in <32GB.

        • zozbot234 6 hours ago
          The "active" count is not very meaningful except as a broad measure of sparsity, since the experts in MoE models are chosen per layer. Once you're streaming experts from disk, there's nothing that inherently requires having 49B parameters in memory at once. Of course, the less caching memory does, the higher the performance overhead of fetching from disk.
        • zargon 6 hours ago
          > ~100GB at 16 bit or ~50GB at 8bit quantized.

          V4 is natively mixed FP4 and FP8, so significantly less than that. 50 GB max unquantized.

        • inventor7777 7 hours ago
          Ahh, that actually makes more sense now. (As you can tell, I just skimmed through the READMEs and starred "for later".)

          My Mac can fit almost 70B (Q3_K_M) in memory at once, so I really need to try this out soon at maybe Q5-ish.

        • EnPissant 6 hours ago
          Streaming weights from RAM to GPU for prefill makes sense due to batching and pcie5 x16 is fast enough to make it worthwhile.

          Streaming weights from RAM to GPU for decode makes no sense at all because batching requires multiple parallel streams.

          Streaming weights from SSD _never_ makes sense because the delta between SSD and RAM is too large. There is no situation where you would not be able to fit a model in RAM and also have useful speeds from SSD.

      • zozbot234 6 hours ago
        These are more like experiments than a polished release as of yet. And the reduction in throughput is high compared to having the weights in RAM at all times, since you're bottlenecked by the SSD which even at its fastest is much slower than RAM.
      • the_sleaze_ 7 hours ago
        Do you have the links for those? Very interested
  • gigatexal 5 hours ago
    Has anyone used it? How does it compare to gpt 5.5 or opus 4.7?
  • coolThingsFirst 5 hours ago
    I got an API key without credit card details I didn’t know they had a free plan.
  • luew 6 hours ago
    We will be hosting it soon at getlilac.com!
  • punkpeye 6 hours ago
    Incredible model quality to price ratio
  • frozenseven 7 hours ago
  • donbreo 4 hours ago
    Aaaand it cant still name all the states in India,or say what happened in 1989
    • mordae 4 hours ago
      Ask Claude how to overthrow a Nazi dictatorship in the US.
  • hongbo_zhang 7 hours ago
    congrats
  • dhruv3006 7 hours ago
    Ah now !
  • creamyhorror 7 hours ago
    [dead]
  • hubertzhang 7 hours ago
    [dead]
  • Razengan 1 hour ago
    [dead]
  • maryjeiel 7 hours ago
    [dead]
  • slopinthebag 6 hours ago
    [flagged]
  • minhajulmahib 7 hours ago
    [flagged]
    • polski-g 7 hours ago
      Why did you bother to submit an AI comment?
      • sidcool 7 hours ago
        I suspect you may have replied to a bot. Dead internet theory
  • shafiemoji 8 hours ago
    I hope the update is an improvement. Losing 3.2 would be a real loss, it's excellent.