There's a very real possibility that AI proponents completely lose the next generation of adults. The output is not enjoyable to consume, the people who rely on it are not cool, and the effects of using it are unpleasant and hard to defend on aesthetic, intellectual, or moral grounds.
There are real use cases for this technology! But the idea that the generation of superficially plausible text is "the next Industrial Revolution" comes out of the same mindset that has turned a neat technology into a banal hellscape for consumers and employees. We desperately need some leadership in companies or institutions that can place this technology in its proper context, and leverage it without getting manic about it.
Social media isn’t always about consuming content. It’s also about getting jolts of momentary joy and reward. You get those in two ways: seeing cool things, and participating in cool things. Especially cool things before they go viral. Clicking like on a post that isn’t viral yet, and gambling to yourself whether it will go viral, has the same dopamine flux when it pays off as winning at the slots. Even my reward-defective brain manages to eke out a moment of reward from that. So if you simply remove the content, what’s left is the gambling market. Gambling on something you upvote going viral isn’t about how much content there is in what you placed your bets on, it’s about being able to have that special knowing look when someone tells you about it because you’ve just won the socio-memetic lottery. And AI isn’t doing anything whatsoever to stop that reward loop.
I proposed a while back that we should have the HN admins strip all integer counts for a week, server-side, to see whether site quality improved or worsened during that time. The mods suggested I ask HN, so I did. HN loathed the idea, for every possible reason except this one: removing all those integers would be like quitting gambling cold turkey after years of pulling the vote lever every day. I'm not much less vulnerable to this than everyone else, but I still want to see it happen someday. I remain reasonably confident that the site's quality would skyrocket after a couple of days of our posts and comments being disinfected of make-integer-go-up jackpots.
There's the classic "I wish Facebook had a dislike button," or the equivalent for Twitter.
But in the thread-based forum context, removing the downvote has interesting effects. For one, it stops people who downvote-brigade to lower visibility. It also stops the "I don't like that guy" engagement and encourages a more positive "I appreciated this comment" mode.
It's not one-size-fits-all, but I've seen positive effects on smaller, more marginalized forums.
The content being untrustworthy doesn't matter when it comes to social media, as most of what is enticing about social media nowadays isn't the content of the content. It's the fact that there is a never-ending stream of content specifically catered to maximize your dopamine to keep you scrolling.
So much of social media nowadays is just low quality clips of TV shows/movies with an AI-generated song over them. Or the same Minecraft parkour map as an AI voice recites an r/AmITheAsshole post. Or AI-generated funny videos. The quality of the content doesn't matter at all.
Everyone I've talked to about how it's all just AI responds with something akin to "I don't care if it's AI, it's funny! Let people enjoy things!"
Isn't it bad now that Sam Altman and the others are backpedaling on this and going "jobs are going to still exist you just can't imagine them!" because the PR problem was getting so big? [1]
Like don't we want people running these companies to be honest to the public rather than misdirection?
> "jobs are going to still exist you just can't imagine them!"
Ironically, this makes even less sense.
If (ostensibly) the goal of developing LLMs was so we can all create more while working less, but he also assures us there will be just as much work in the future, then what was the point of this tech in the first place?
> This is a good instinct: one of the virtues of democracy is the way that it gives people a feeling of control over their own lives. People who believe that they can rein in AI companies through votes and laws and regulations will be much less likely to turn to violence.
I like how this is entirely put in terms of "feelings" and "beliefs" with the ultimate goal being to keep people from resorting to violence. It doesn't seem to play any role how much control people actually have.
I think before he thought OpenAI was going to make him a trillionaire he was more honest about X Risk and job displacement since he didn't have the incentive to lie. Most early AI thinkers saw AI as more dangerous than nukes.
> We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. [1]
We are about to experience the commoditization of intellectual work, in much the same way the Industrial Revolution commoditized manual production. I don’t expect a Musk-esque abundance utopia this decade, but the impact will exceed anything we’ve seen in centuries. There is not an industry on earth that won’t change in the next few decades.
To conceptualize AI as merely “superficially plausible text” would be like writing off a Watt steam engine in 1776. The current AI bubble might be early, but it won’t be wrong. The fervor with which corporations are exploring the space stems not from misplaced optimism but an existential threat. Right now every industry is vulnerable to disruption on a massive scale.
And we’re still in the early stages. Frontier models like Claude or GPT-5.5 are still just tuning 2017’s “Attention is All You Need” with MoE, RLHF, and more compute. We are roughly where online services were in the early 90s, when Prodigy and CompuServe were battling it out for market share before the open web swept them aside.
We are still waiting for the modern equivalents of Yahoo, Google, Amazon, and Facebook, never mind the lessers. As Tim Berners-Lee said of the web: “we have not seen it yet. The future is still so much bigger than the past.”
IMHO shrugging it off as “superficially plausible text” is the extreme to the other side.
We’re past plausible text since GPT-2 and it’s undeniable that the technology is making waves right now and is having an impact.
As you can’t judge the impact of the Industrial Revolution by the first steam engines, you can’t dismiss the impact the technology is having right now.
What's the point of listening to purely AI-generated music?
I don't mean music that has AI-generated stems as part of an arrangement, where a human actually created it and used AI for bits and pieces. I see absolutely no point in listening to purely AI-generated music. The fundamental essence of music is emotion; listening to something generated without emotion is pointless. It might sound good, but it's hollow and devoid of meaning.
I've tried to listen to it, it doesn't even make me "sad", it makes me feel... Nothing. I'm a hobby musician and I incorporated some AI-generated parts in some tracks where I mangled/processed them but my idea was exactly to express how hollow AI-generated music is without the human aspect.
A lot of the music that autoplays on Spotify is AI-generated, and I literally didn't know until I checked; the emotion was triggered successfully. I don't really see why only a human should be able to trigger an emotion in you. If I'm at a party, say I don't know the artist, everything is AI-made, and everybody is vibing, then what's "wrong" with it?
I think this is more of a musician's perspective, which I respect, but a lot of people simply wouldn't care who (or what) created it.
Most people don't care about music, as most don't care about art in general. People like entertainment though.
What you are describing is more akin to a form of hollow entertainment through the medium of music. A lot of pop music can also fall into that category (no, not all; there is also a lot of artistry in many pop artists/songs).
If AI-generated music triggers emotions in you, then keep consuming it, but know that it's a hollow form of the art: there's no one on the other side communicating with you. It's like having a conversation with a chatbot, which might sound human even though you know there's no one on the other side listening to you. AI music is the same thing in reverse: there's no one on the other side telling you a story, or a feeling they went through; it's just a mimesis of it.
I have. It's overly polished, formulaic and dull. It's devoid of any of the qualities that make music interesting. There's nothing a human is trying to communicate. Perhaps it could be used as elevator or hold music.
I agree, it's shockingly good these days; we can argue about morality etc, fine, but burying one's head in the sand and claiming it's bad puts you at odds with reality, which isn't a good place to be.
It's pretty silly that so many people take as an axiom that the human brain basically has a monopoly on certain patterns of electrical signals, and have semi-religious beliefs that this will always be the case.
It's not that AI can't convince a novice that what comes out is passable.
It's that experts in a field generally agree that what comes out is insidiously hollow garbage.
This isn't a "semi-religious" belief. It's linear token soup and diffusion bakes running headfirst into actual expertise, second- and third-order effects, refined skill and taste, and so on.
If you actually want to see civilization advance, you cannot rely on machines that merely mash up existing intellectual output while pretending to have expertise.
We already had that in the form of art school avant-gardism. AI is just style transfer of that, with corporate sycophancy and valley hyperbole as a veneer.
But do you really believe it will stay that way? What do you think models will be like 10 years from now? (Not only models; we must include processes and tools as well.) Developers were thinking this until recently; then there's a sudden switch of "shit, it's good enough," run that through a 50x loop, and suddenly it becomes "shit, it's actually great." Which proves, IMO, that it's only a matter of time before it's not hollow garbage but actually innovative and expert in its field.
I still think you are missing entirely the point about music or any art in general.
It doesn't matter how technically innovative a model is, or how much expertise it has: as long as an AI is not a consciousness that can express itself, its output will be hollow. There's no way around that.
If some form of AI becomes conscious, and can express itself through whatever art form it conjures for that, why would it even use music? Music is human; it's tuned to how our brains work and perceive sounds. I'd be much more interested to discover what art forms another form of consciousness that we can communicate with could come up with on its own.
I can't fully agree with the hollow part. When AI resonates with me about real-life issues (I understand it's just a machine without thoughts), it's pretty expressive and spot-on, and genuinely useful. I don't really see why it couldn't be the same with music; it can already write completely unique pieces that are very entertaining and full of emotions (even though they are "fake")...
The brain perceiving sounds a certain way is, in the end, just data that can be mapped as well. An AI can make us laugh precisely because it understands speech really well (and will be a thousand times better someday). What's the actual difference with music?
Let me give you another example: there's a meme about older folks getting bamboozled by AI images (especially doomsday stuff), which proves that these images do trigger genuine emotions in them. What's the difference whether that image actually exists or not (or whether, say, a human photographed it)?
Support of all kinds (including voice), marketing, real estate, finance... yes, a ton of fields are being heavily impacted right now. But right now doesn't even matter; what matters is where we know it will go as theory becomes practice.
Generally, people don't care about "fields being impacted", and the students certainly don't. People care about the impact certain technology has on their daily lives, on their welfare and the ability to pay off their mortgage and provide a decent life for their children.
The AI as it is today isn't really doing any of those things. At most, it's a sort of reliable replacement for Google Search. Worse, it's being presented as a threat to all those things the people care about.
> "In producing textiles" — but has there been actual positive impact in other sectors?
I'm sure the Industrial Revolution didn't happen all at once either; it started somewhere and crept outward.
> comes out of the same mindset that has turned a neat technology into a banal hellscape for consumers and employees
I'm going to say up front that I'm not as familiar with this period of history as I should be, but -- would it be totally unfair to say the same of the "Industrial Revolution"?
I'm not gonna say they're equivalent by any means, but my understanding is the "Industrial Revolution" was hellish for many people. Maybe the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?
> the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?
They are good things. If you were an adult, male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman you stopped dying in childbirth. If you think of infants as people, they stopped massively dying.
The Industrial Revolution was good. But it also required erecting the modern administrative state to manage. People had to soberly measure the problems, weigh the benefits and risks, and then invent new institutions and ways of thinking to accommodate the new world.
It was good on a long time scale, but I think the parent poster refers to the short term. If I recall correctly, during the early Industrial Revolution the average life span decreased, child mortality went through the roof, and malnutrition meant adults lost their teeth in their early 20s at best. That was… worse. It took time for the revolution to become a net-positive for the average person (which I certainly wouldn’t dispute).
> They are good things. If you were an adult, male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman you stopped dying in childbirth. If you think of infants as people, they stopped massively dying.
That happened in the Second Industrial Revolution. The First Industrial Revolution was much less comfortable for both workers (who were given much worse working conditions) and the aristocracy (whose landholdings were much less valuable) - it was the middle class who benefited.
> The Industrial Revolution was good.
The outcomes of the Industrial Revolutions were good. The experience of living through those revolutions was mixed.
How about if you were a working class child, just before they started in a mine or a textile mill? Was it good for them?
Infant deaths decreased for a while (and NOT because of the industrial revolution):
> These patterns are better explained by changes in breastfeeding practices and the prevalence or virulence of particular pathogens than by changes in sanitary conditions or poverty[1]
then rose:
>Mortality at ages 1-4 years demonstrated a more complex pattern, falling between 1750 and 1830 before rising abruptly in the mid-nineteenth century.
[1] Davenport, Romola J. (2021). "Mortality, migration and epidemiological change in English cities, 1600–1870." International Journal of Paleopathology, 34, 37–49. PMC7611108.
The public can't see any trains, electricity, concrete or glass windows, they see employment going away as workers and zero benefit as consumers.
Maybe AI enables great inventions in a decade, but for now the only appeal is that multinational corporations get to fire workers and everything's filled with slop. Of course they're not happy.
> There's a very real possibility that AI proponents completely lose the next generation of adults.
The college-age students I interact with hate AI content from other people, but they love using AI for their own work.
They'll pump AI generated memes and AI altered images all day long. Then they'll use ChatGPT to do their homework and write their resume, then look for an AI tool that will spam apply to jobs for them. Then when they get the job they plan to use ChatGPT to level the playing field with more experienced, older peers.
That's not even getting into the AI entrepreneurs who think they're going to use AI to start a company or find a winning strategy to trade memecoins or bet on PolyMarket so they don't have to get a job at all.
I think the next generation is all-in on AI for their own use. They see it as their advantage over the boomers occupying all the good jobs. They think ChatGPT is their cheat code for getting into these companies and taking those jobs.
That will happen inevitably. We are throwing spaghetti at the wall right now and cleaning up the mess; lessons will be learned. The question is whether that phase will lead to real lasting damage, and to what. For myself, I no longer read cold emails; I believe they are all AI-generated, and that communication method may legitimately die culturally. What else will be destroyed?
Many things will change, because many things in the world are currently useless; in a way, most jobs shouldn't even exist. You think the guy behind the McDonald's counter should exist? He shouldn't; that's just an engineering "mistake," as it can already be solved. The world is just slow to catch up, and it's not only AI, it's automation generally. We have banked for decades on jobs that virtually shouldn't exist except for the sole purpose of creating jobs. It's literally like a giant Ponzi scheme, and it will all catch up with us at some point.
I think society will completely reshape itself over the next decades, likely with UBI and other forms of social help, and the ones who don't want to partake in the whole "AI orchestration" will just not have any opportunity, IMO. Sad, but this is the way I see it. I truly believe it because I and ALL the people I know have pseudo-replaced their work with solely orchestrating AI, including very complex jobs. Lately, because some of my friends asked me, I've also built "agents" that replaced their work entirely (customer management, remote), and their employers don't even know about it, which proves those jobs shouldn't even exist, as they are ALREADY replaceable: all Zoom meetings are immediately recorded, agents run a basic adversarial loop across all the common models, then proceed with the tasks, and so on. That lasts about 30 minutes, and the whole week of work is done. All chats are sent directly to a triage agent as well, then the whole RAG thing, and so on.
My work went from managing/developing 1 repo to 70 repos at once, evening to morning, answering questions like a bot 10 hours a day with 8 monitors in front of my face. And I'm realistic: I know that at some point I can literally replace my own self with an AI to answer for me; it's just a matter of time.
We need to rethink everything and the whole AI hate from the youth will not change anything about it.
I have multiple friends running pretty large businesses with 30 or more staff, and right now they are literally at the point of arguing about why they shouldn't fire most of them. It's fucking sad, but it's the reality.
Many countries have a form of UBI, although it's not universal in the way UBI proper would be. Look at France with their RSA as an example: if you have no income or a low income, you're entitled to it.
RSA is not UBI. UBI literally means Universal Basic Income; it's not just for no-income/low-income people, it's universal.
You are conflating the concept of UBI with social welfare. They are different things, and it's a bit annoying to see the erosion of the UBI concept into social welfare. I've noticed an uptick in this over the past year or so; no idea where it's originating from...
Agreed, I butchered it. But what is the concrete difference right now, for someone who has no job (so, where UBI is relevant), between social welfare and "UBI" if, in the end, that person gets a guaranteed monthly income either way?
The concrete difference is that the society around a person living in a UBI society will look very different from one where there's only social welfare.
I don't have enough expertise in this field, but I don't think we should be thinking only in doomsday scenarios. Humans are quite resilient and innovative; society will completely change, and I genuinely believe we will find ways. There will be a lot of suffering in between (and maybe after as well, as there is now), but we might eventually reach a point in automation where a lot of prices drop until things are virtually free. Food could be included: if we have 24/7 machines that can build, expand, deliver, and so on with nearly free energy, it's not crazy to think that a kilogram of chicken could be worth ten times less than it is now. So many things could be reconsidered.
UBI could also mean that people could live further away from major cities, and eventually housing will be automatically built as well, so costs could drop sharply.
Of course humans will adapt, the core issue is how we can avoid as much suffering as possible while these changes happen, that's always the point. No one wants to live a life during a transitional period in history where suffering is increased, as a species we should be working to alleviate that.
What's the point of progress if we keep repeating the same mistakes of leaving miserable people behind? Is that progress or just a repetition of the cycle with new shiny things?
That's the only statement that's true. Admitting to AI use is unfashionable in the Western world at this time.
But how much would you like to bet that 90% of the students who were booing also used AI to do their homework quite often? So your takeaway would be "the AI stole their education." No: they were dishonest, and the AI helped them cheat themselves out of learning.
Technology doesn't make anything banal or a hellscape, or fire people. Technology is a lever.
If humans use AI to produce worse output because they are too lazy to bother reviewing and iterating on it, that is a human problem. If humans are going to use AI to help them exploit other humans more efficiently, that is also caused by the human rather than the technology.
Also, the ChatGPT moment for humanoid robots is coming this year or next. It will become very obvious that AI use in these robots is not just superficially plausible text.
> But how much would you like to bet that 90% of those students who were booing also used AI to do their homework for them quite often? So your take away would be "the AI stole their education". No, they were dishonest and the AI helped them cheat themselves out of learning.
This is like saying a smoker can't criticize the tobacco industry. It's entirely possible to recognize that AI in school is a huge problem while (hypothetically, in this case) still using it. Indeed, if enough of your peers are using it and you do not, you are effectively being punished for being virtuous. It's a lot like being the one cyclist in the Tour de France who isn't doping.
Similarly, if your peers aren't able to keep a conversation going in a seminar because they had AI do their reading and assignments for them, then you, as a student, are having your education stolen from you in a very real way. Education is something that happens in community. When enough of your community is using AI, your education will suffer.
Again that is a problem with the group of people and how they use technology rather than the technology itself.
I will die on this hill: AI _properly_ integrated into education will be a huge improvement for students because it will enable each student to have personalized instruction and tutoring.
> AI _properly_ integrated into education will be a huge improvement for students because it will enable each student to have personalized instruction and tutoring.
This is a fine thing to wish for. But literally every AI company today wants their customers to use AI as much as possible.
I, too, would like to live in a world where AI is only _properly_ integrated into education. But that is impossible without limiting its improper integration. An no AI company wants any limits on AI.
> a very real possibility that AI proponents completely lose the next generation of adults
I doubt it. AI seems fundamentally useful. If the guys at the top can’t get their shit together with messaging and strategy, and it increasingly looks like they can’t, they’ll be replaced before an entire generation is potentially rendered permanently uncompetitive. (And to be clear, there is no rush to adopt.)
> We desperately need some leadership in companies or institutions that can place this technology in its proper context
We need the public debate to stop being set by Altman, Musk et al. We need our generation’s Dickens, Tolstoys, Sinclairs and Whitmans.
What are the ways potential futures with AI, on the spectrum from the familiar sci-fi AGI to more-subtle forms, could work? What are the novel ways it might not? How does capitalism need to evolve? Electoral democracy? Labour organization? If I think to the last few years of television and movies, Westworld is the only one to have contributed anything original to the discourse since Isaac Asimov’s era of science fiction.
> We need our generation’s Dickens, Tolstoys, Sinclairs and Whitmans.
They're out there, but the artists are roundly anti-AI; if you want their input, you have to listen to what they're saying, rather than pretending that dissenting voices are uninformed.
I don't really think we should talk about "use cases" anymore when it can virtually replace or enhance almost any form of white-collar work, and soon physical labor as well. (People will act surprised the moment it comes, of course, the same as with LLMs despite all the research made prior; if theory supports it, it will be.) Of course humanoids will be in every home, they'll cost the same as a phone soon enough, and we will not be able to live without them.
We don't talk about human intelligence in terms of "use cases." I think we need to be realistic about what AI will be in our lives: most people already can't do without it, and this will no doubt expand further.
> would 100% expect a commencement speaker to be hyping me up
That’s what this speaker was trying to do. The problem is it was stupid and dishonest. It could have been done properly. But none of that will rise to the level of a roadmap. If you’re looking for a roadmap at commencement, you were failed at multiple steps before.
Unemployment rampant. All production remains in the hands of a few. All power (tokens) remains in the hands of a few. Goods are cheaper, but no one can buy them. The path to the upper class is now guarded closely by tokens, and potential avenues for entrepreneurs diminish rapidly. Own an AI or compute, get someone to give you tokens, or live in poverty.
The distribution of abundance at present is close to evil, with America reducing entitlements and support rather than expanding them. Rampant waste. No reason to think any of this will change.
> Cost of goods and services drops by orders of magnitude at every point in the supply chain.
That sounds great, but how are LLMs supposed to achieve this? You can't just say "AI will make a utopia". You have to present a vision for how it will get us there.
I'm tired of hearing about how AI will solve all the world's problems. I want to see actual progress toward achieving these goals, and for the most part that hasn't manifested. Most people would consider AI to have had a net negative impact on their lives.
That's quite an unsubstantiated leap. The world has gone through plenty of digital transformations, and the number of people in poverty has only _shrunk_.
It's hard not to make that leap when so many layoffs are (according to PR releases anyway) attributed to AI adoption. Even if the reality on the ground is that many of these workforce reductions are to make the balance sheets look better (presumably as a bet on AI), it's impossible to ignore the accelerating wealth gap, especially in the context of the gutting of regulations and state actors leveraging world events on prediction markets. We will not be given a fair deal if we simply wait for our benefactors to provide one.
The number of people in absolute poverty has shrunk, but the proportion of national income held by the wealthy has increased, so economic mobility is declining. There are many reasons for this, but typically deployment of technology is a capital expense and employers aim to realize all the gains from their investment, notwithstanding the upskilling and/or deskilling effect it has on workers, who are treated as fungible economic units rather than people. Nobody likes this except capitalists.
In particular, CS students are feeling it more than most majors, especially given the shock: most of them probably thought CS was the field for job security.
Saw an article recently that said CS majors were up there with performing arts majors and art history majors in terms of unemployment rate.
Yes, but during those transformations, the CEOs of the companies selling the products involved weren't actively and aggressively marketing them as being able to replace all the humans they employ.
You can't have it both ways: either LLMs are an amazing, revolutionary technology that can replace many human jobs in unprecedented ways, or it's going to be a mild transition that only really helps people.
> the CEOs of the companies selling the products involved weren't actively and aggressively marketing them as being able to replace all the humans they employ
The assembly line was explicitly about replacing skilled with relatively unskilled labor.
It isn't the first time a new technology has been pitched to replace many worker's jobs, both successful and unsuccessful versions of the promise have come to pass several times.
I think what they are saying is that "something can replace a job" does not inherently imply the next step is poverty. From that perspective, you can absolutely have it both (and many other combinations of) ways.
That was exactly what a great many things were marketed as, such as the jacquard loom and dynamite.
What actually happened in each case was that employment went up for a good long while, as the efficiency boost to the sectors touched made investment far more viable. Eventually successive rounds of automation did reduce employment in each of weaving and mining, but it wasn’t an overnight catastrophe as initially advertised or feared.
Do we want to be distracted by sewing shirts and writing Python scripts when the hardware can do the math for us?
Programmers (and other workers, but this is a tech-centric forum) need to start accepting that programming was a necessary evil of the before times. We didn't have the theories. We didn't have the manufacturing techniques.
Before hardware was powerful enough to run models on a laptop, we needed all that hand-crafted custom state management to avoid immediate resource exhaustion, or to hide the deficiencies of the chips of the day.
For all the appeals to tech workers to lean into a high-tech life, programming as humans did it in the before times seems pretty outdated. Bring back rotary phones too, I guess.
If we don't have jobs we are free to:
Take up arms against an exploitative political and owner class minority.
Make sure grandma and the kids are ok. Everyone has enough to eat?
Free the sweatshop kids we exploit, without giving them only a choice of "the mines" or college, from obligations to our own meat suits.
???? What else?
A whole lot of job culture, too, was just busy work to satisfy the beliefs of those who are generationally churning out of life. Bye grandpa; thanks for zero assurances but tons of obligations; you won't be missed!
Elon and such are not an immutable constant of the universe. Few more years and he'll be Mitch McConnelling out on TV. Especially with all the drug abuse.
Everyone under 50 needs to prepare for the future not LARP the past.
I am meeting with my state legislators this week to, among other things, discuss how big tech should be on the same hook as the food industry, which has to label its products in the open.
Just as the auto standards are openly legislated, AI standards should be as well. It's just electrical physics, not magic.
Just as the government has to publish its laws, big tech should have to release all code, guiding theoretical principles, and training and development environments, and attest that that is what they loaded on those servers.
Use their tools against them; they have the government in their corner giving them handouts. Go get yours.
You all came up in a society that afforded zero assurances this whole time. Rather than idle about jerking off the American ego perhaps you should have listened to everyone saying this was coming a decade ago. Two decades ago. 4 decades ago.
I have zero respect for my fellow Americans. Willfully ignorant and willingly exploited serfs. Forget I said anything; you all didn't do the political action work to put me on the hook for your healthcare so thoughts and prayers, HNers.
> You want people to stick to social norms, call it both ways
Oh, I downvoted both of you. But I only flagged you because of the name calling, which is against the guidelines [1]. When I flag I like to give the person on the other side a note, in case they genuinely didn’t know.
At this point money is essentially a social construct. None of the billionaires have a Scrooge McDuck vault full of gold coins.
Think ST:TNG; automation makes enough stuff. Why worry about money?
So focus on political action then; log off this VC funded freebie intended to ameliorate your feelings about the rich owners and operators of this site, and do like they do; tell government to make things right by you or we replace government.
You think PG is sitting on the sidelines letting Congress figure it out themselves? He's putting his thumb on the scale by social networking with politicians.
Gotta leave the basement and do the work
Americans are heavily propagandized and naive af. So exhausted by educated morons.
The funny thing is that it's not even true. People invested in AI just gloat at the thought of common men in abject poverty, so this is the marketing that stuck.
Shows you don't need to have red skin and horns to delight in the suffering of starving people.
the same people who have been using AI to write their papers, etc., while supposedly "not liking it". Classic hypocrisy. You can't have it both ways.
College graduates being that myopic and failing at such basic logic. One can only wonder about the quality of education they got and how it would help them in the modern technological world. Then again, being that hypocritical, maybe they would do very well.
>University of Central Florida’s College of Arts and Humanities and Nicholson School of Communication and Media
Governments and companies are so bad at selling AI to the population. The reason is the elephant in the room: everyone, or at least most people, understand the consequences. You can't sit and be amazed at the speed at which an AI does tasks and not be able to extrapolate what that would mean for jobs.
They are both right, the revolution needs to be oriented for ordinary people and college kids to benefit from it or else their attitude is wholly justified. There's basically no reason for them to cheer on a future of trillion dollar corporations using AI services to battle for knowledge work market share.
My first day of orientation at the CS dept was at the height of the dot com crash. I think I got told by 20+ seniors that day to drop out before paying a single bill. That it was all pointless and the internet was an over valued bubble and no one was getting hired. Mood on campus was scary for almost two years post the crash. If we had social media back then I can only imagine how much more fears would have been amplified.
In the past, "labor saving technology" has always spawned alternate jobs that people could take with some retraining. This time it might be truly different. If one day AI can actually do all knowledge work, there might not be anything left for former knowledge workers to do. There's no physical law that says new technology necessarily produces 1:1 new, different jobs.
> In the past, "labor saving technology" has always spawned alternate jobs that people could take with some retraining.
Labor saving technology does not create enough alternative jobs to employ all those that it displaced, otherwise it wouldn't be labor saving.
Instead, the surplus created by these technologies allows that society to deploy labor on less immediately necessary jobs. These jobs weren't created by the technology, they were always there, but society did not have the resources to staff them (think education, research, academia, merchants, etc.)
This dynamic has been true since pre-historic times, so you'll need some extraordinary evidence if you want us to believe this time is different.
Many people who point out that the Industrial Revolution became the basis of modern quality of life skip over what happened between the 1700s-1800s and today.
Things like Unions, Wars, etc.
What comes after new technology has always been the elite class owning it all and forcing everybody else to suffer, until something manages the distribution of resources slightly better (war forces that).
The Luddites were mad not because the machines put them out of work but because the machines were supremely shitty. The machines were dangerous and they made lousy products that reflected a lack of pride in workmanship.
The Luddites were all for saving labor, but not if enshittified products and slavery to unreliable machines were the price.
Many Luddites were protesting labor conditions. At the time the majority of labor laws were being written by the capital class with the help of political leaders and the constabulary. Common complaints were working hours, child labor, safety, wages, and protection from furlough. There were some who protested the quality of the product the machines created... but I would say those are the minority.
Destroying the machines was a way to gain leverage for a class of people who had none. People had been using looms for centuries. It wasn't the technology that was the problem... that's what the victors, the capitalists, have written was the reason.
Right, read the room. Tell them that "there are challenges ahead, but their excellent education and optimism will overcome even the most ominous obstacles, technological or otherwise."
> their excellent education and optimism will overcome even the most ominous obstacles, technological or otherwise
Or, alternatively, that we need the humanities today in a fundamental, possibly existential, way. If AI is another Industrial Revolution, rise to be our Sinclairs, Dickens and Tolstoys.
It's interesting that I'm seeing this kind of anti-AI tendency only in American/Western art circles. Anywhere else, in the Middle East or Asia, artists are having fun experimenting with it.
Anyone can pick up a pencil and practice for hours a day! You can look out a window for inspiration! There is no "gatekeeping" art, only people upset it doesn't come as easily to them as B2B SaaS, confusing real effort and introspection for "gatekeeping".
The AI art people were so happy to rub it in artists' faces that finally, without effort or appreciation, they no longer had to pay a skilled person for an image.
Timestamp 1:20:50 is about where the clown show starts. Totally out of touch. Her nervous giggling and throwing her hands up when she realizes the audience doesn't think AI is the greatest invention since sliced bread.... Wow.
That is nauseating to watch. She is an abysmal public speaker: arrogant, extremely uninspiring, and generally very out of touch. I would feel this way if she were reading a review of The Cat in the Hat. Letting this woman speak about AI, what a disaster.
AI has been the "next industrial revolution" since the '70s and '80s. We'll have a few more RoboCop movies, and then things will be as they always are after hype cycles.
Rightfully so. Unfettered capitalism will only end with a bunch of rich people producing and selling the means of living to the rest of us at just the right markup to keep their feet on our throats. Organized labor needs a resurgence in a big way.
I suspect for CS it would have been outright food riots. The humanities are probably the best insulated from AI as the uncanny valley is really obvious in AI literature and art. CS is the final stage in the “programming myself out of a job” meme which is quite depressing if you’re just getting your first job (or, more likely at the moment, not.)
Owe is an interesting choice of a word. Don't get me wrong, I personally am of the opinion that, by default, most schools for most programs, the related body of works can be accomplished by a warm body ( some of it based on personal anecdotes -- in US mind you ). There are exceptions and those include some non-humanities and, well, people who are curious ( but that was always true for them ).
Still, just because a technology facilitates something does not make their distaste any less potent. If anything, they recognize how much of world's building blocks are a fancy facade ( mild alliteration intended ).
Perhaps, owe was a poor word to use too. I will admit that, however I did not think that would be a point of focus in my comment at the time.
> in US mind you
That is my only reference.
> Still, just because a technology facilitates something does not make their distaste any less potent.
Sure, I agree once again. I may have not explained my position well initially. I just cannot help but feel it's a little hypocritical. And again, hypocritical might be a poor word to use.
We have kids booing a commencement speaker after her AI comment (which I think was a distasteful comment), but at UCLA's graduation a few days ago, we had this: https://www.youtube.com/shorts/zSqOPOzrIig
I think why I am having difficulty describing what I am thinking is because there is not one homogeneous group of students. There is clearly a subset of students that oppose AI's current and future costs/benefits. Though, at the same time, there is a different subset of students that heavily rely on AI. Some to even a problematic degree.
I have a few friends who are professors at a prestigious private university in my city. They have all shared their little tricks for trying to combat AI usage in academics. Some put hidden white text in the margins of their assignments. When citations are submitted with work, they look for 'utm_source=chatgpt' in the URLs. Some of the foreign language professors craft writing prompts with words that they know LLMs tend to translate incorrectly.
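That last check is mechanical enough to script. A minimal sketch, assuming the tell is a `utm_source` query parameter mentioning chatgpt (links copied out of ChatGPT commonly carry `utm_source=chatgpt.com`; the function name and example URLs here are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

def cited_via_chatgpt(url: str) -> bool:
    """Return True when the URL's query string carries a utm_source
    value that mentions chatgpt (e.g. ?utm_source=chatgpt.com)."""
    params = parse_qs(urlparse(url).query)
    return any("chatgpt" in value.lower() for value in params.get("utm_source", []))

# Flag suspicious citations in a submitted bibliography.
citations = [
    "https://example.com/article?utm_source=chatgpt.com",
    "https://example.com/article",
]
flagged = [u for u in citations if cited_via_chatgpt(u)]
```

Of course, this only catches students who paste links without cleaning the tracking parameters off; stripping them defeats the check entirely.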
Based on the research I can find via a few quick searches, it appears that in the populations of the studies, AI usage is far more common than AI abstinence. I imagine these students want to use AI to benefit themselves but not harm themselves in the future. I do not fault them for that in the slightest, but I do not think that is how things are going to work out. I strongly believe the students who misuse AI to do their work for them -- not help them -- will be in for a rude awakening.
As I am reading the source, it is weirder than I initially accounted for. The speech she gave was fairly benign compared to some of the bigger quotables from Musk, Altman, or other AI industry figures. Basically, the march of time and 'I remember when' kinda nostalgia.
But given how weirdly benign the speech was, I have to ask. Why the boos? Is there some context I am missing? Was the speaker recently on the wrong side of history?
I am asking half-jokingly, but it seems like there is a giant part that is missing somewhere and I have no reasonable way of explaining it.
I am lost as well. I am more confused about why she was even talking about AI at all in the speech. She could have just hit the high-notes and used the same cliches as many other speeches.
I feel mentioning AI in a commencement speech would be like stating something in a graduation speech like, "Congratulations, class of 2026. The Carolina Hurricanes have swept their opponents in both rounds of the NHL Stanley Cup playoffs. May your future be as bright as theirs."
No telling though. I am completely unaware of who the speaker even is.
Most people in uni have compulsory humanities courses, so I imagine it's not too hard for them to attribute actions by moneyed interests to boost AI to the furtherance of capital, surveillance, and a widening of the economic gap. The fact remains, though, most of these degrees (with the obvious exception of those specific to current AI/LLM technologies) could have been attained without AI before.
Sure, I do not disagree in the slightest. However, I think degrees, while optimistically serving as a certification of a certain level of understanding/knowledge, also provide a sort of social signal. However, Goodhart's Law is still in full effect, so that does complicate matters a bit.
> Meta and Jeff Bezos being held up in a good light
The message to a group of graduating artists should have been about the literature, art and public works that turned the Industrial Revolution's hyper-concentrated gains into broadly-felt benefits. (And then, after WWII and the Green Revolution, encouraged us to start reckoning with its environmental cost.)
AI is showing itself to be useful, potentially, and with increasing confidence day to day. That deserves neither worship nor demonization. Yet history (told by the humanities!) tells us it probably hasn't started in the right leaders' hands. It is the role of the humanities to show and guide the public through that debate and reconciliation.
I mean, duh. Do we really think someone with the title of "vice president of strategic alliances at Tavistock Group" lives in the same universe as the rest of us? In her alternative universe, Zucc and Bezos are heroes to look up to. These people have no actual interaction with the rest of us, and just assume their world view is universally held.
Look how genuinely surprised she was by the audience's reaction. In their world, AI is an unambiguous good.
There are real use cases for this technology! But the idea that the generation of superficially plausible text is "the next Industrial Revolution" comes out of the same mindset that has turned a neat technology into a banal hellscape for consumers and employees. We desperately need some leadership in companies or institutions that can place this technology in its proper context, and leverage it without getting manic about it.
I proposed once a while back that we should have the HN admins strip all integer counts for a week server-side, to see if the site quality improved or worsened during that time. The mods suggested I ask HN, so I did. HN loathed the idea of it, for every possible reason except this one: removing all those integers would be like quitting gambling cold turkey after years of pulling the vote lever every day. I’m not much less vulnerable to this than everyone else, but I still want to see it happen someday. I remain reasonably confident that our social media site’s quality would skyrocket after a couple days of our posts and comments being disinfected of make-integer-go-up jackpots.
There's the classic "I wish facebook had a dislike button" or the equivalent for twitter.
But in the thread-based forum context, removing the downvote has interesting effects. For one, it stops people who down-vote-brigade to lower visibility. It also stops the "I don't like that guy" engagement and works on a more positive "I appreciated this comment" mode.
It's not one-size fits all but I've seen positive effects on more marginalized forums.
So much of social media nowadays is just low quality clips of TV shows/movies with an AI-generated song over them. Or the same Minecraft parkour map as an AI voice recites an r/AmITheAsshole post. Or AI-generated funny videos. The quality of the content doesn't matter at all.
Anyone I've talked to about how it was all just AI just responds with something akin to "I don't care if it's AI, it's funny! Let people enjoy things!"
So, now people are in groups and chats full of bots posting exactly what they want to hear.
Instead of Meta, it's states, companies, or individuals hoping to make money from their followers.
Like, don't we want the people running these companies to be honest with the public rather than engaging in misdirection?
[1]. https://www.platformer.news/sam-altman-ai-backlash/
Ironically, this makes even less sense.
If (ostensibly) the goal of developing LLMs was so we can all create more while working less, but he also assures us there will be just as much work in the future, then what was the point of this tech in the first place?
> This is a good instinct: one of the virtues of democracy is the way that it gives people a feeling of control over their own lives. People who believe that they can rein in AI companies through votes and laws and regulations will be much less likely to turn to violence.
I like how this is entirely put in terms of "feelings" and "beliefs" with the ultimate goal being to keep people from resorting to violence. It doesn't seem to play any role how much control people actually have.
What about any of these folks’ biographies hints that they’re capable of being honest?
> We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. [1]
[1]. https://www.anthropic.com/news/core-views-on-ai-safety
To conceptualize AI as merely “superficially plausible text” would be like writing off a Watt steam engine in 1776. The current AI bubble might be early, but it won’t be wrong. The fervor with which corporations are exploring the space stems not from misplaced optimism but an existential threat. Right now every industry is vulnerable to disruption on a massive scale.
And we’re still in the early stages. Frontier models like Claude or GPT-5.5 are still just tuning 2017’s “Attention is All You Need” with MoE, RLHF, and more compute. We are roughly where online services were in the early 90s, when Prodigy and CompuServe were battling it out for market share before the open web swept them aside.
We are still waiting for the modern equivalents of Yahoo, Google, Amazon, and Facebook, never mind the lessers. As Tim Berners-Lee said of the web: “we have not seen it yet. The future is still so much bigger than the past.”
We've been past merely plausible text since GPT-2, and it's undeniable that the technology is making waves right now and having an impact.
Just as you can't judge the impact of the Industrial Revolution by the first steam engines, you can't dismiss the impact the technology is having right now.
There was recently an article shared around here that an LLM diagnosed ER patients more accurately than doctors.
Looking beyond LLMs, there's image analysis to detect cancer and other diseases.
Like in coding, AI can and should be a useful tool for the human who decides and is ultimately responsible.
AI-made music is frankly pretty good, do you actually listen to it?
I don't mean music that has AI-generated stems as part of an arrangement, where a human actually created it and used AI for bits and pieces. I see absolutely no point in listening to purely AI-generated music. The fundamental essence of music is emotion; listening to something generated without emotion has no point. It might sound good, but it's hollow and devoid of meaning.
I've tried to listen to it, it doesn't even make me "sad", it makes me feel... Nothing. I'm a hobby musician and I incorporated some AI-generated parts in some tracks where I mangled/processed them but my idea was exactly to express how hollow AI-generated music is without the human aspect.
I think this is more of a musician side which I respect, but a lot of people would simply not care who created it (or what).
What you are describing is more akin to a form of hollow entertainment through the medium of music. A lot of pop music can also fall into that category (no, not all; there is also a lot of artistry in many pop artists/songs).
If AI-generated music triggers emotions in you, then keep consuming it, but know that it's a hollow form of the art; there's no one on the other side communicating with you. It's basically like having a conversation with a chatbot: it might sound human, but you know there's no one on the other side listening to you. AI music is the same the other way around: there's no one on the other side telling you a story, or a feeling they went through. It's just a mimesis of it.
It's pretty silly that so many people take as an axiom that the human brain basically has a monopoly on certain patterns of electrical signals, and have semi-religious beliefs that this will always be the case.
It's that experts in a field generally agree that what comes out is insidiously hollow garbage.
This isn't a "semi-religious" belief. It's linear token soup and diffusion bakes running headfirst into actual expertise, second and third order effects, refined skill and taste, and so on.
If you actually want to see civilization advance, you cannot rely on machines that merely mash up existing intellectual output while pretending to have expertise.
We already had that in the form of art school avant-gardism. AI is just style transfer of that, with corporate sycophancy and valley hyperbole as a veneer.
It doesn't matter how technically innovative, or how much expertise, a model has, while an AI is not a consciousness that can express itself it will be hollow. There's no way around that.
If some form of AI becomes conscious and can express itself through whatever art form it conjures for that, why would it even use music? Music is human; it's tuned to how our brains work and perceive sounds. I'd be much more interested to discover what art forms another form of consciousness that we can communicate with could come up with on its own.
The brain perceiving sounds a certain way is, in the end, just data that can be mapped as well. An AI can make us laugh precisely because it understands speech really well (and will be a thousand times better someday), so what's the actual difference with music?
Let me give you another example: there's a meme about older folks getting bamboozled by AI images (especially doomsday stuff), which proves those images do trigger genuine emotions in them. What's the difference whether that image actually exists or not (or whether, say, a human photographed it)?
The AI as it is today isn't really doing any of those things. At most, it's a sort of reliable replacement for Google Search. Worse even, it's being presented as a threat to all those things that people care about.
I'm going to say up front that I'm not as familiar with this period of history as I should be, but -- would it be totally unfair to say the same of the "Industrial Revolution"?
I'm not gonna say they're equivalent by any means, but my understanding is the "Industrial Revolution" was hellish for many people. Maybe the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?
They are good things. If you were an adult male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman, you stopped dying in childbirth. If you think of infants as people, they stopped dying en masse.
The Industrial Revolution was good. But it also required erecting the modern administrative state to manage. People had to soberly measure the problems, weigh the benefits and risks, and then invent new institutions and ways of thinking to accommodate the new world.
That happened in the Second Industrial Revolution. The First Industrial Revolution was much less comfortable for both workers (who were given much worse working conditions) and the aristocracy (whose landholdings were much less valuable) - it was the middle class who benefited.
> The Industrial Revolution was good.
The outcomes of the Industrial Revolutions were good. The experience of living through those revolutions was mixed.
Infant deaths decreased for a while (and NOT because of the industrial revolution):
> These patterns are better explained by changes in breastfeeding practices and the prevalence or virulence of particular pathogens than by changes in sanitary conditions or poverty[1]
then rose:
>Mortality at ages 1-4 years demonstrated a more complex pattern, falling between 1750 and 1830 before rising abruptly in the mid-nineteenth century.
[1] Davenport, Romola J. (2021). "Mortality, migration and epidemiological change in English cities, 1600–1870." International Journal of Paleopathology, 34, 37–49. PMC7611108.
Maybe AI enables great inventions in a decade, but for now the only appeal is that multinational corporations get to fire workers and everything's filled with slop. Of course they're not happy.
The college-age students I interact with hate AI content from other people, but they love using AI for their own work.
They'll pump AI generated memes and AI altered images all day long. Then they'll use ChatGPT to do their homework and write their resume, then look for an AI tool that will spam apply to jobs for them. Then when they get the job they plan to use ChatGPT to level the playing field with more experienced, older peers.
That's not even getting into the AI entrepreneurs who think they're going to use AI to start a company or find a winning strategy to trade memecoins or bet on PolyMarket so they don't have to get a job at all.
I think the next generation is all-in on AI for their own use. They see it as their advantage over the boomers occupying all the good jobs. They think ChatGPT is their cheat code for getting into these companies and taking those jobs.
I think society will completely reshape itself over the next decades, likely with UBI and other forms of social help, and the ones who don't want to partake in the whole "AI orchestration" will just not have any opportunity, imo. Sad, but this is the way I see it. I truly believe it because myself and ALL the people I know have pseudo-replaced their work with solely orchestrating AI, including very complex jobs. Lately, because some of my friends asked me, I've also built "agents" that replaced their work entirely (customer management, remote), and their employer doesn't even know about it, which proves those jobs shouldn't even exist, as they are ALREADY replaceable. All Zoom meetings are immediately recorded, agents run a basic adversarial loop with all common models, then proceed with doing tasks and so on; that lasts about 30 minutes and the whole week of work is done. All chats are sent directly to a triage agent as well, then the whole RAG thing, and so on.
My work went from managing/developing 1 repo to 70 repos at once, evening to morning, answering questions like a bot 10 hours a day with 8 monitors in front of my face. And I'm realistic: I know at some point I can literally replace my own self with an AI to answer for me; it's just a matter of time.
We need to rethink everything and the whole AI hate from the youth will not change anything about it.
I have multiple friends also running pretty large businesses with 30 or more staff, and right now they are literally at a point where they argue about why they shouldn't fire most of them, it's fuckin sad, but it's the reality.
You are conflating the concept of UBI with social welfare. They are different things, and it's a bit annoying to see the erosion of the UBI concept into social welfare; I've noticed an uptick of this in the past year or so, no idea where it's originating from...
UBI could mean also that people could be living in places further away from main cities, and eventually housing will be automatically built as well so costs could drop sharply.
What's the point of progress if we keep repeating the same mistakes of leaving miserable people behind? Is that progress or just a repetition of the cycle with new shiny things?
It took two world wars till we had an aberrational period where the middle class actually had lives which were good.
UBI can't happen because governments globally don't have the money to pay for it. It's good to hope, but the details aren't in favor.
We'll have no UBI and little purpose.
That's the only statement that's true. Admitting to AI use is unfashionable in the Western world at this time.
But how much would you like to bet that 90% of those students who were booing also used AI to do their homework for them quite often? So your takeaway would be "the AI stole their education." No: they were dishonest, and the AI helped them cheat themselves out of learning.
Technology doesn't make anything banal or a hellscape, or fire people. Technology is a lever.
If humans use AI to produce worse output because they are too lazy to bother reviewing and iterating on it, that is a human problem. If humans are going to use AI to help them exploit other humans more efficiently, that is also caused by the human rather than the technology.
Also, the ChatGPT moment for humanoid robots is coming this year or next. It will become very obvious that AI use in these robots is not just superficially plausible text.
This is like saying a smoker can't criticize the tobacco industry. It's entirely possible to recognize that AI in school is a huge problem while (hypothetically, in this case) still using it. Indeed, if enough of your peers are using it and you do not, you are effectively being punished for being virtuous. It's a lot like being the one cyclist in the Tour de France who isn't doping.
Similarly, if your peers aren't able to keep a conversation going in a seminar because they had AI do their reading and assignments for them, then you, as a student, are having your education stolen from you in a very real way. Education is something that happens in community. When enough of your community is using AI, your education will suffer.
I will die on this hill: AI _properly_ integrated into education will be a huge improvement for students because it will enable each student to have personalized instruction and tutoring.
This is a fine thing to wish for. But literally every AI company today wants their customers to use AI as much as possible.
I, too, would like to live in a world where AI is only _properly_ integrated into education. But that is impossible without limiting its improper integration. An no AI company wants any limits on AI.
I doubt it. AI seems fundamentally useful. If the guys at the top can’t get their shit together with messaging and strategy, and it increasingly looks like they can’t, they’ll be replaced before an entire generation is potentially rendered permanently uncompetitive. (And to be clear, there is no rush to adopt.)
> We desperately need some leadership in companies or institutions that can place this technology in its proper context
We need the public debate to stop being set by Altman, Musk et al. We need our generation’s Dickens, Tolstoys, Sinclairs and Whitmans.
What are the ways potential futures with AI, on the spectrum from the familiar sci-fi AGI to more-subtle forms, could work? What are the novel ways it might not? How does capitalism need to evolve? Electoral democracy? Labour organization? If I think to the last few years of television and movies, Westworld is the only one to have contributed anything original to the discourse since Isaac Asimov’s era of science fiction.
They're out there, but the artists are roundly anti-AI; if you want their input, you have to listen to what they're saying, rather than pretending that dissenting voices are uninformed.
We don't talk about human intelligence in terms of "use cases". I think we need to be realistic about what AI will be in our lives: most people already can't do without it, and this will no doubt expand further.
To be fair, this isn’t the commencement speaker’s job.
I would 100% expect a commencement speaker to be hyping me up for what comes next.
That’s what this speaker was trying to do. The problem is it was stupid and dishonest. It could have been done properly. But none of that will rise to the level of a roadmap. If you’re looking for a roadmap at commencement, you were failed at multiple steps before.
That being said, we already have relative superabundance and we're more miserable than ever, so it's not clear that more of it will cheer us up.
The distribution of abundance right now is close to evil: America is reducing entitlements and support, not expanding them. Rampant waste. No reason to think any of this will change.
It's not great that we can buy iPhones. (And AI is going to make all electronics scarce, so much for abundance there.)
That sounds great, but how are LLMs supposed to achieve this? You can't just say "AI will make a utopia". You have to present a vision for how it will get us there.
I'm tired of hearing about how AI will solve all the world's problems. I want to see actual progress toward achieving these goals, and for the most part that hasn't manifested. Most people would consider AI to have had a net negative impact on their lives.
Saw an article recently that said CS majors were up there with performing arts majors and art history majors in terms of unemployment rate.
You can't have it both ways: either LLMs are an amazing, revolutionary technology that can replace many human jobs in unprecedented ways, or this is going to be a mild transition that really only helps people.
The assembly line was explicitly about replacing skilled with relatively unskilled labor.
I think what they are saying is "that something can replace a job does not inherently imply the next step is poverty". From that perspective, you can absolutely have it both (and many other combinations of) ways.
What actually happened in each case was that employment went up for a good long while, as the efficiency boost to the sectors touched made investment far more viable. Eventually successive rounds of automation did reduce employment in each of weaving and mining, but it wasn’t an overnight catastrophe as initially advertised or feared.
Programmers (and other workers but this a tech centric forum) need to start to accept that programming was a necessary evil of the before times. We didn't have the theories. We didn't have the manufacturing techniques.
Before hardware was powerful enough to run models on a laptop we needed all that hand crafted custom state management to avoid immediate resource exhaustion. Or to hide the deficiencies of the chips of the day.
For all the appeals to tech workers to lean into a high-tech life, programming as humans did in the before times seems pretty outdated. Bring back rotary phones too, I guess.
If we don't have jobs we are free to:
Take up arms against an exploitative political and owner class minority.
Make sure grandma and the kids are ok. Everyone has enough to eat?
Free the sweatshop kids we exploit, who get no choice between "the mines" and college, from obligations to our own meat suits
???? What else?
A whole lot of job culture was just busy work to satisfy the beliefs of those who are generationally churning out of life. Bye, grandpa; thanks for zero assurances but tons of obligations; you won't be missed!
Elon and such are not an immutable constant of the universe. Few more years and he'll be Mitch McConnelling out on TV. Especially with all the drug abuse.
Everyone under 50 needs to prepare for the future not LARP the past.
How are we not going to be begging whoever controls chip fabs and electrical plants for compute tokens? HOW!? EXPLAIN IT.
I am meeting with my state legislators this week to, among other things, discuss why big tech should be on the same hook as the food industry, which has to label its products in the open.
Just as auto standards are openly legislated, AI standards should be as well. It's just electrical physics, not magic.
Just as the government has to publish its laws, big tech should have to release all code, guiding theoretical principles, and training and development environments, and attest that that is what they loaded onto those servers.
Use their tools against them; they have the government in their corner giving them handouts. Go get yours.
You all came up in a society that afforded zero assurances this whole time. Rather than idly jerking off the American ego, perhaps you should have listened to everyone saying this was coming a decade ago. Two decades ago. Four decades ago.
I have zero respect for my fellow Americans. Willfully ignorant and willingly exploited serfs. Forget I said anything; you all didn't do the political action work to put me on the hook for your healthcare so thoughts and prayers, HNers.
Ah so your answer is AI will cause most people to live in abject poverty. Good talk.
Please don’t do this.
What is this? The NBA? You want people to stick to social norms, call it both ways.
Oh, I downvoted both of you. But I only flagged you because of the name calling, which is against the guidelines [1]. When I flag I like to give the person on the other side a note, in case they genuinely didn’t know.
[1] https://news.ycombinator.com/newsguidelines.html
ICE has an $80 billion budget.
Demand Congress pay off mortgages rather than hand Leon Skum tens of billions.
There you go. Stability.
[1] https://www.fhfa.gov/data/dashboard/nmdb-outstanding-residen...
Think ST:TNG; automation makes enough stuff. Why worry about money?
So focus on political action, then: log off this VC-funded freebie intended to ameliorate your feelings about the rich owners and operators of this site, and do as they do; tell the government to make things right by you, or we replace the government.
You think PG is sitting on the sidelines letting Congress figure it out themselves? He's putting his thumb on the scale by networking with politicians.
Gotta leave the basement and do the work
Americans are heavily propagandized and naive af. So exhausted by educated morons.
Shows you don't need to have red skin and horns to delight in the suffering of starving people.
College graduates being that myopic and failing at such basic logic. One can only wonder about the quality of the education they got and how it will help them in the modern technological world. Though, being that hypocritical, maybe they will do very well.
>University of Central Florida’s College of Arts and Humanities and Nicholson School of Communication and Media
yep, clearly not Stanford.
Yes you can. They use AI and also despise it because it will turn the world into one big caste system. Ones with access to compute, and ones without.
Labor saving technology does not create enough alternative jobs to employ all those that it displaced, otherwise it wouldn't be labor saving.
Instead, the surplus created by these technologies allows that society to deploy labor on less immediately necessary jobs. These jobs weren't created by the technology, they were always there, but society did not have the resources to staff them (think education, research, academia, merchants, etc.)
This dynamic has been true since pre-historic times, so you'll need some extraordinary evidence if you want us to believe this time is different.
Things like Unions, Wars, etc.
What comes after new technology has always been the elite class owning them all and forcing everybody else to suffer until something managed the distribution of resources slightly better (War forces that).
Avoiding a repeat of that while also increasing productivity would be great.
The Luddites were all for saving labor, but not if enshittified products and slavery to unreliable machines were the price.
Sounds pretty familiar to me.
Destroying the machines was a way to gain leverage for a class of people who had none. People had been using looms for centuries. It wasn't the technology that was the problem... that's what the victors, the capitalists, have written was the reason.
Well, yeah.
Or, alternatively, that we need the humanities today in a fundamental, possibly existential, way. If AI is another Industrial Revolution, rise to be our Sinclairs, Dickens and Tolstoys.
Hmm, how would we measure and confirm this hypothesis?
Anyone can pick up a pencil and practice for hours a day! You can look out a window for inspiration! There is no "gatekeeping" art, only people upset it doesn't come as easily to them as B2B SaaS, mistaking real effort and introspection for "gatekeeping".
The AI art people were so happy to rub it in artists' faces that finally, without effort or appreciation, they no longer had to pay a skilled person for an image.
https://www.youtube.com/watch?v=zwYkHS8jvSE
"Passion--let's go!" Lady read the room.
The More Young People Use AI, the More They Hate It
https://news.ycombinator.com/item?id=47963163
Study found that young adults have grown less hopeful and more angry about AI
https://news.ycombinator.com/item?id=47704443
Somehow I have a feeling that the reaction would have been totally different if it had been the EECS graduates.
Fear and rejection in certain professions is real and maybe even understandable.
I imagine 25 years ago someone telling music graduates “streaming is the future of music distribution” would have received the same reaction.
However there was a feeling that “the job” is radically changing right now.
https://www.youtube.com/shorts/vAn7DsXWQGE
Still, just because a technology facilitates something does not make their distaste any less potent. If anything, they recognize how many of the world's building blocks are a fancy facade (mild alliteration intended).
> in US mind you
That is my only reference.
> Still, just because a technology facilitates something does not make their distaste any less potent.
Sure, I agree once again. I may have not explained my position well initially. I just cannot help but feel it's a little hypocritical. And again, hypocritical might be a poor word to use.
We have kids booing a commencement speaker after her AI comment (which I think was a distasteful comment), but at UCLA's graduation a few days ago, we had this: https://www.youtube.com/shorts/zSqOPOzrIig
(Student's explanation: https://www.youtube.com/shorts/rswUgIfj1YU)
I think why I am having difficulty describing what I am thinking is because there is not one homogeneous group of students. There is clearly a subset of students that oppose AI's current and future costs/benefits. Though, at the same time, there is a different subset of students that heavily rely on AI. Some to even a problematic degree.
I have a few friends that are professors at a prestigious, private university in my city. They have all shared their little tricks for trying to combat AI usage in academics. Some put hidden white text in the margins of their assignments. When citations are submitted with work, they look for 'utm_source=chatgpt.com' in the URLs. Some of the foreign-language professors craft writing prompts with words that they know LLMs tend to translate incorrectly.
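That citation check is easy to automate. A minimal sketch, assuming the citations arrive as bare URL strings; the `utm_source=chatgpt.com` query parameter is the tracking tag ChatGPT appends to links it cites:

```python
from urllib.parse import urlparse, parse_qs

def flags_chatgpt_source(url: str) -> bool:
    """Return True if the URL carries ChatGPT's utm_source tracking parameter."""
    params = parse_qs(urlparse(url).query)
    return "chatgpt.com" in params.get("utm_source", [])

# A citation URL copied straight out of a ChatGPT answer keeps the tag:
print(flags_chatgpt_source("https://example.com/paper?utm_source=chatgpt.com"))  # True
print(flags_chatgpt_source("https://example.com/paper"))                         # False
```

Of course this only catches students who paste links verbatim; stripping the parameter defeats it, which is why the professors layer several tricks.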
Based on the research I can find via a few quick searches, it appears that in the populations studied, AI usage is far more common than AI abstinence. I imagine these students want to use AI to benefit themselves but not harm themselves in the future. I do not fault them for that in the slightest, but I do not think that is how things are going to work out. I strongly believe the students that misuse AI to do their work for them -- not help them -- will be in for a rude awakening.
As I am reading the source, it is weirder than I initially accounted for. The speech she gave was fairly benign compared to some of the bigger quotables from Musk, Altman, or other AI industry figures. Basically, march-of-time and "I remember when" kinda nostalgia.
But given how weirdly benign the speech was, I have to ask. Why the boos? Is there some context I am missing? Was the speaker recently on the wrong side of history?
I am asking half-jokingly, but it seems like there is a giant part that is missing somewhere and I have no reasonable way of explaining it.
I feel mentioning AI in a commencement speech would be like me stating something in a graduation speech like, "Congratulations, class of 2026. The Carolina Hurricanes have swept their opponents in both rounds of the NHL Stanley Cup playoffs. May your future be as bright as theirs."
No telling though. I am completely unaware of who the speaker even is.
The message to a group of graduating artists should have been about the literature, art and public works that turned the Industrial Revolution's hyper-concentrated gains into broadly-felt benefits. (And then, after WWII and the Green Revolution, encouraged us to start reckoning with its environmental cost.)
AI is, with increasing confidence day to day, showing itself to be useful. That deserves neither worship nor demonization. Yet history (told by the humanities!) tells us it probably hasn't started out in the right leaders' hands. It is the role of the humanities to surface that debate and guide the public through it to reconciliation.
Look how genuinely surprised she was by the audience's reaction. In their world, AI is an unambiguous good.
Clearly people don't consider it obvious, considering my comment got flagged.