Animats
I'm tired of LLMs.

Enough billions of dollars have been spent on LLMs that a reasonably good picture of what they can and can't do has emerged. They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time. That last limits their usefulness. They can't safely be in charge of anything important.

If someone doesn't soon figure out how to get a confidence metric out of an LLM, we're headed for another "AI Winter". Although at a much higher level than last time. It will still be a billion dollar industry, but not a trillion dollar one.

At some point, the market for LLM-generated blithering should be saturated. Somebody has to read the stuff. Although you can task another system to summarize and rank it. How much of "AI" is generating content to be read by Google's search engine? This may be a bigger energy drain than Bitcoin mining.

cubefox
I'm not tired, I'm afraid.

First, I'm afraid of technological unemployment.

In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough. But superhuman AI now seems only a few years away. It will be our last invention; it will mean total automation. There will be hardly any jobs left, if any, that only a human can do.

Many countries will likely move away from a job-based market economy. But technological progress will not stop. The US, owning all the major AI labs, will leave all other societies behind. Except China perhaps. Everyone else in the world will be poor by comparison, even if they will have access to technology we can only dream of today.

Second, I'm afraid of war. An AI arms race between the US and China seems already inevitable. A hot war with superintelligent AI weapons could be disastrous for the whole biosphere.

Finally, I'm afraid that we may forever lose control to superintelligence.

In nature we rarely see less intelligent species controlling more intelligent ones. It is unclear whether we can sufficiently align superintelligence to have only humanity's best interests in mind, like a parent cares for their children. Superintelligent AI might conclude that humans are no more important in the grand scheme of things than bugs are to us.

And if AI lets us live, but continues to pursue its own goals, humanity will from then on be only a small footnote in the history of intelligence: that relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.

low_tech_love
The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past two years or so, and won't be able to until the day I die. It's not so much that I think people have used AI, but that I know they have, with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?"; there is no escape from that.

Now, I'm not going to criticize anyone who does it; like I said, you have to, that's it. But what I had never noticed until now is that knowing a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me. This has completely destroyed my interest in reading any new things. I guess I'm lucky that we have produced so much writing in the past century or so that I'll never run out of stuff to read, but it's still depressing, to be honest.

koliber
I am approaching AI with caution. Shiny things don't generally excite me.

Just this week I installed Cursor, the AI-assisted VSCode-like IDE. I am working on a side project and decided to give it a try.

I am blown away.

I can describe the feature I want built, and it generates changes and additions that get me 90% there within 15 or so seconds. I take those changes and carefully review them, as if I were doing a code review of a super-junior programmer. Sometimes, when I don't like the approach it took, I ask it to change the code, and it obliges and returns something closer to my vision.

Finally, once it is implemented, I manually test the new functionality. Afterward, I ask it to generate a set of automated test cases. Again, I review them carefully, both from the perspective of correctness and suitability. It over-tests on things that don't matter, and I throw away part of the code it generates. What stays behind is on-point.

It has sped up my ability to write software and tests tremendously. Since I know what I want, I can describe it well. It generates code quickly, and I can spend my time reviewing and correcting. I don't need to type as much. It turns my abstract ideas into reasonably decent code in record time.

Another example. I wanted to instrument my app with Posthog events. First, I went through the code and added "# TODO add Posthog event" in all the places I wanted to record events. Next, I asked Cursor to add the instrumentation code in those places. With some manual copy-and-pasting and lots of small edits, I instrumented a small app in <10 minutes.
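The TODO-marker workflow can be sketched roughly like this. Everything here is illustrative: `PosthogStub` is a hypothetical stand-in for the real `posthog` client so the example runs offline; in a real app you would `import posthog` and call `posthog.capture(...)` with the same arguments.

```python
# Sketch of the TODO-marker workflow (illustrative, offline version).
# PosthogStub is a stand-in for the real posthog client; in the real
# app you would `import posthog` and call posthog.capture(...) directly.

class PosthogStub:
    def __init__(self):
        self.events = []

    def capture(self, distinct_id, event, properties=None):
        # record the event instead of sending it over the network
        self.events.append((distinct_id, event, properties or {}))

posthog = PosthogStub()

def signup(user_id, plan):
    # ... create the account ...
    # Before the AI pass, this line was just: "# TODO add Posthog event"
    posthog.capture(user_id, "user_signed_up", {"plan": plan})

signup("u123", "free")
print(posthog.events)
```

The markers give the model an unambiguous anchor for where each event belongs, which is what makes the bulk pass fast to review.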

We are not at the point where AI writes code for us and we can blindly accept it. But we are at a point where AI can take care of a lot of the dreary, busy typing work.

gizmo
AI writing is pretty bad, AI code is pretty bad, AI art is pretty bad. We all know this. But it's easy to forget how many new opportunities open up when something becomes 100x or 10000x cheaper. Things that are 10x worse but 100x cheaper are still extremely valuable. It's the relentless drive to making things cheaper, even at the expense of quality, that has made our high quality of life possible.

You can make houses by hand out of beautiful hardwood with complex joinery. Houses built by expert craftsmen are easily 10x better than the typical house built today. But what difference does that make when practically nobody can afford it? Just like nobody can afford to have a 24/7 tutor that speaks every language, can help you with your job, grammar check your work, etc.

AI slop is cheap and cheapness changes everything.

Toorkit
Computers were supposed to be these amazing machines that are super precise. You tell it to do a thing, it does it.

Nowadays, it seems we're happy with computers apparently going RNG mode on everything.

2+2 can now be 5, depending on the AI model in question, the day, and the temperature...

Validark
One thing that I hate about the post-ChatGPT world is that people's genuine words or hand-drawn art can be classified as AI-generated and thrown away instantly. What if I wanted to talk at a conference and used somebody's AI trigger word so they instantly rejected me even if I never touched AI at all?

This has already happened in academia, where certain professors just dump(ed) their students' essays into ChatGPT, ask it if it wrote them, and fail anyone whose essay ChatGPT claims. Obviously this is beyond moronic: ChatGPT doesn't have a memory of everything it's ever done, you can ask it for different writing styles, and some people naturally write in a style similar to ChatGPT's, which is why ChatGPT has a signature style at all.

I've also heard of artists having their work removed from competitions over claims that it was auto-generated, even when they have a video of themselves producing it stroke by stroke. It turns out AI generates art based on human art, so obviously there are some people out there whose work looks like what AI is reproducing.

mks
I am bored of AI - it produces boring and mediocre results. Now, the science and engineering achievement is great - being able to produce even boring results at this level would have been considered sci-fi 10 years ago.

Maybe I am just bored of people posting these mediocre results over and over on social media and landing pages as some kind of magic. Then again, most content people produce themselves is boring and mediocre anyway. Gen AI just takes away even the last remaining bits of personality from their writing, adding a flair of laziness: look at this boring piece I was too lazy to write, so I asked AI to generate it.

As the quote goes: "At some point we ask of the piano-playing dog not 'Are you a dog?', but 'Are you any good at playing the piano?'" - I am eagerly waiting for the Gen AIs of today to cross the uncanny valley. Even with all this fatigue, I am positive that AI can and will enable new use cases, and that it could be the first major UX change since the introduction of graphical user interfaces - true pixie dust sprinkled on actually useful tools.

willguest
Leave it up to a human to overgeneralize a problem and make it personal...

The explosion of dull copy and generic wordsmithery is, to me, just a manifestation of the utilitarian profiteering that has elevated these models to their current standing.

Let us not forget that the whole game is driven by the production of 'more' rather than 'better'. We would all rather have low-emission, high-expression tools, but that's simply not what these companies are encouraged to produce.

I am tired of these incentive structures. Casting the systemic issue as a failure of those who use the tools ignores the underlying motivation and keeps us focused on the effect and not the cause, plus it feels old-fashioned.

Devasta
In Star Trek, one thing that I always found weird as a kid is they didn't have TVs. Even if the holodeck is a much better experience, I imagine sometimes you would want to watch a movie and not be in the movie. Did the future not have works like No Country for Old Men or comedies like Monty Python, or even just stuff like live sports and the news?

Nowadays we know why the crew of the Enterprise all go to live performances of Shakespeare and practice musical instruments and painting themselves: electronic media are so full of AI slop there is nothing worth seeing, only endless deluges of sludge.

canxerian
I'm a software dev and I'm tired of LLMs being crowbarred into every single product I build and use, to the point where they are used unanimously and unequivocally over better, cheaper, and simpler solutions.

I'm also tired of people who claim to be excited by AI. They are the dullest of them all.

kingkongjaffa
Generally, the people who seriously let GenAI write for them without copious editing were the ones who were bad writers with poor taste anyway.

I use GenAI every day as an idea generator and thought partner, but I would never simply copy and paste the output somewhere for another person to read and take seriously.

You have to treat these things adversarially and pick out the useful from the garbage.

It just lets people who created junk food create more junk food for people who consume junk food. But there is the occasional nugget of a good idea that you can apply to your own organic human writing.

KaiserPro
I too am not looking forward to the industrial-scale job disruption that AI brings.

I used to work in VFX, and one day I want to go back to it. However I suspect that it'll be entirely hollowed out in 2-5 years.

The problem is that, like typesetting, the typewriter, or the word processor, LLMs make writing text so much faster and easier.

The arguments about handwriting vs. the typewriter are quite analogous to LLM vs. pure hand. People who were good and fast at handwriting hated the typewriter. Everyone else embraced it.

The ancient Greeks were deeply suspicious of the written word as well:

> If men learn this [writing], it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

I don't like LLMs muscling in and kicking me out of things that I love. But can I put the genie back in the bottle? No. I will have to adapt.

throwaway13337
I get it. The last two decades have soured us all on the benefits of tech progress.

But the previous decades were marked by tech optimism.

The difference here is the shift to marketing. The largest tech companies are gatekeepers for our attention.

The most valuable tech created in the last two decades was not in service of us but to manipulate us.

Previously, the customer of the software was the one buying it. Our lives improved.

The next wave of tech now on the horizon gives us an opportunity to change the course we’ve been on.

I’m not convinced there is political will to regulate manipulation in a way that does more good than harm.

Instead, we need to show a path to profitability through products that are not manipulative.

The most effective thing we can do, as developers and business creators, is to again make products aligned with our customers.

The good news is that the market for honest software has never been better. A good chunk of people are finally learning not to trust VC-backed companies that give away free products.

Generative AI provides an opportunity for tiny companies to provide real value in a new way that people will pay for.

The way forward is:

1. Do not accept VC. Bootstrap.

2. Legally bind your company to not productizing your customer.

3. Tell everyone what you’re doing.

It’s not AI that’s the problem. It’s the way we have been doing business.

franciscop
> "Yes, I realize that thinking like this and writing this make me a Neo-Luddite in your eyes."

Not quite, I believe (and I think anyone can) both that AI will likely change the world as we know it, AND that right now it's over-hyped to a point that it gets tiring. For me this is different from e.g. NFTs, "Big Data", etc. where I only believed they were over-hyped but saw little-to-no substance behind them.

senko
What's funny to me is how many people protest AI as a means to generate incorrect, misleading, or fake information, as if they haven't used the internet in the past 10-15 years.

The internet is chock-full of incorrect, fake, or misleading information, and has been ever since people figured out they could churn out low-quality content in between Google ads.

There's a whole industry of "content writers" who write seemingly meaningful stuff that doesn't bear close scrutiny.

Nobody has trusted product review sites for years, with people coping by adding "site:reddit" as if a random redditor can't engage in some astroturfing.

These days, it's really hard to figure out whom (in the media / on the net) to trust. AI has just thrust that long-overdue fact into the spotlight.

thewarrior
I’m tired of farming - Someone in 5000 BC

I’m tired of electricity - Someone in 1905

I’m tired of consumer apps - Someone in 2020

The revolution will happen regardless. If you participate you can shape it in the direction you believe in.

AI is the most innovative thing to happen in software in a long time.

And personally, AI is FUN. It sparks joy to code using AI. I don't need anyone else's opinion; I'm having a blast. It's a bit like Rails for me in that sense.

This is HACKER news. We do things because it’s fun.

I can tackle problems outside of my comfort zone and make it happen.

If all you want to do is ship more 2020s era B2B SaaS till kingdom come no one is stopping you :P

wrasee
For me what's important is that you are able to communicate effectively. If you use language tools, other tools, or even a real personal assistant, and you effectively communicate a point that is ultimately yours in the making, then I expect that is what matters and will win out.

Otherwise this is just about style. That’s important where personal creative expression is important, and in fairness to the article the author hits on a few good examples here. But there are a lot of times where personal expression is less important or even an impediment to what’s most important: communicating effectively.

The same-ness of AI-speak should also diminish as the number and breadth of the technologies mature beyond the monoculture of ChatGPT, so I’m also not too worried about that.

An accountant doesn’t get rubbished if they didn’t add up the numbers themselves. What’s important is that the calculation is correct. I think the same will be true for the use of LLMs as a calculator of words and meaning.

This comment is already too long for such a simple point. Would it have been wrong to use an LLM to make it more concise, to have saved you some of your time?

slicktux
I like AI… for me it’s a great way of getting the ‘average’ of a broad array of answers to a single question but without all the ads I would get from googling and searching pages. For example, when searching for times to cook or grams of sugar to add to my gallon of iced tea…or instant pot cooking times.

For more technical, STEM-related things it's a good way to get a baseline or a direction; enough for me to draw my own conclusions or implementations. It's like a rubber ducky I can talk to.

xena
My last job made me shill for AI stuff because GPUs have a lot of income potential. One of my next ones is going to make me shill for AI stuff because it makes people deal with terrifying amounts of data.

I understand why this is the case, but it's still kinda disappointing. I'm hoping for an AI winter so that I can talk about normal uses of computers again.

ryanjshaw
> There are no shortcuts to solving these problems, it takes time and experience to tackle them.

> I’ve been working in testing, with a focus on test automation, for some 18 years now.

OK, the first thought that came to my mind reading this: sounds like an opportunity to build an AI-driven product.

I've been using Cursor daily. I use nothing else. It's brilliant and I'm very happy. If I could have Cursor for Well-Designed Tests I'd be extra happy.

1vuio0pswjnm7
What's the next hype after "AI"? And what is next after that? Maybe we can just skip it all.
est
AI acts like a bad intern these days, and should be treated like one. Give it more guidance and don't make important tasks depend on it.
jeswin
> But I’m pretty sure I can do without all that ... test cases ...

Test cases?

I did a Show HN [1] a couple of days back for a UI library built almost entirely with AI. GPT-o1 generated these test cases for me: https://github.com/webjsx/webjsx/tree/main/src/test - in minutes instead of days. The quality of the test cases is comparable to what a human would produce.

75% of the code I've written in the last year has been with AI. If you still see no value in it (especially for things like test cases), I'm afraid you haven't figured out how to use AI as a tool.

[1]: https://news.ycombinator.com/item?id=41644099

heystefan
Not sure why this is front page material.

The thinking is very surface level ("AI art sucks" is the popular opinion anyway) and I don't understand what the complaints are about.

The author is tired of AI and likes movies created by people. So just watch those? It's not like we are flooded with AI movies/music. His social network shows dull AI-generated content? Curate your feed a bit and unfollow those low-effort posters.

And in the end, if AI output is dull, there's nothing to be afraid of -- people will skip it.

snowram
I quite like some parts of AI. Ray reconstruction and supersampling methods have been getting incredible, and I can now play games at twice the frames per second. On the scientific side, meteorological prediction and protein folding have made formidable progress thanks to it. Too bad this isn't the side of AI that is in the spotlight.
nuc1e0n
I tried to learn AI frameworks. I really did. But I just don't care about them. AI as it is today just isn't useful to me. Databases and search engines are reliable. The output of AI models is totally unreliable.
monkeydust
AI is not just GenAI, ML sits underneath it (supervised, unsupervised) and that has genuinely delivered value for the clients we service (financial tech) and in my normal life (e.g. photo search, screen grab to text, book recommendations).

As for GenAI, I keep going back to expectation management: it's very unlikely to give you the exact answer you need (and if it does, well, your job longevity is questionable), but it can help accelerate your learning, thinking, and productivity.

throwaway123198
I'm bored of IT. Software is boring, AI included. None of this feels like progress. We've automated away white-collar work... but we also acknowledge most white-collar work is busy work that's considered a bullcr*p job. We need to get back to innovation in manufacturing, materials, etc. - i.e. the real world.
zombiwoof
The most depressing thing for me is the rush and the all-out hype. I mean, Apple not only renamed AI "Apple Intelligence", but if you go INTO an Apple Store, its banner is everywhere, even as a wallpaper on the phones with the "glow".

But guess what isn't there? An actual shipping IMPLEMENTATION. It's not even ready yet, but the HYPE is so overblown.

Steve Jobs is crying in his grave how stupid everyone is being about this.

mark_l_watson
Nice thoughts. Since 1982, half my work has been in one of the fields loosely called AI, and the other half more straight-up software development. After mostly doing deep learning, and now LLMs, for almost ten years, I miss conventional software development.

When I was swimming this morning I thought of writing an RDF data store with partial SPARQL support in Racket or Common Lisp - basically trading a year of my time to do straight-up design and coding, for something very few people would use.

I get very excited by shiny new things like the advanced voice interface for ChatGPT and NotebookLM, both fine product ideas and implementations, but I also feel some general fatigue.

EMM_386
The one use of AI that annoys me the most is Google trying to cram it into search results.

I don't want it there, I never look at it, it's wasting resources, and it's a bad user experience.

I looked around a bit but couldn't see if I can disable that when logged in. I should be able to.

I don't care what the AI says ... I want the search results.

ricardobayes
I personally don't see AI as the new Internet, as some claim it to be. I see it more as the new 3D-printing.
seydor
> same massive surge I’ve seen in the application of artificial intelligence (AI) to pretty much every problem out there

I have not. Perhaps programming, in these initial stages, is the most 'applied' AI, but there is still not a single major AI movie and no consumer robots.

I think it's way too early to be tired of it

Smithalicious
Do people really view so much content of questionable provenance? I read a lot and look at a lot of art, but what I read and look at is usually shown to me by people I know, created by authors and artists with names and reputations. As a result I basically never see LLM-written text and only occasionally AI art, and when I see AI art at least it was carefully guided by a real person with an artistic vision still (the deep end of AI image generation involves complex tooling and a lot of work!) and is easily identified as such.

All this "slop apocalypse" the-end-is-nigh stuff strikes me as incredibly overblown, affecting mostly only "open web" mass social media platforms, which were already 90% industrially produced slop for instrumental purposes anyway.

me551ah
People talk about 'AI' as if Stack Overflow didn't exist. Reinventing the wheel is something programmers don't do anymore. Most of the time, someone somewhere has already solved the problem you are solving. Programming used to be about finding those solutions and repurposing them for your needs. Now it has changed to asking AI the exact question, with AI acting as a better search engine.

The gains to programming speed and ability are modest at best; the only ones talking about AI replacing programmers are people who can't code. If anything, AI will increase the need for more programmers, because people rarely delete code. With the help of AI, code complexity is going to go through the roof, eventually growing enough to not fit into the context windows of most models.

alentred
I am tired of innovations being abused. AI itself is super exciting and fascinating. But, it being abused -- to generate content to drive more ad-clicks, or the "Now better with AI" promise on every landing page, etc. etc. -- that I am tired of, yes.
sensanaty
What I'm really tired of is people completely misrepresenting the Luddites as if they were simply an anti-progress or anti-technology cult or whatever, and nothing else. Kinda hilariously sad that the propaganda of the time managed to win out over the genuine concerns the Luddites had about inhumane working environments & conditions.

It's very telling that the rabid AI sycophants are painting anyone who has doubts about the direction AI will take the world as some sort of anti-progress lunatic, calling them luddites despite not knowing the actual history involved. The delicious irony of their stances aligning with the people who were okay with using child labor and mistreating workers en-masse is not lost on me.

My hope is that AI does happen, and that the first people to rot away because of it are exactly the AI sycophants hell-bent on destroying everything in the name of "progress", AKA making some rich psychopaths like Sam Altman unfathomably rich and powerful to the detriment of everyone else.

A good HN thread on the topic of luddites, as it were: https://news.ycombinator.com/item?id=37664682

thih9
Doesn’t that kind of change follow the overall trend?

We continuously shift to higher-level abstractions, trading reliability for accessibility. We went from binary to assembly, then to garbage collection, and to using Electron almost everywhere; AI seems like yet another step.

drillsteps5
I've always thought that "actual" AI (I guess it's mostly referred to as "General AI" now) will require a feedback loop and continuous unsupervised learning: the system decides on an action, executes it, receives feedback, assesses the situation in relation to its goals (positive and negative reinforcement), corrects itself (adjusts the network), and the cycle repeats. This is not the case with current generative AI, where the network is trained and then a snapshot of the trained network is used to produce output. That works for some limited number of applications, but it will never produce General AI, because there's no feedback loop. So it's a bit of a gimmick.
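The decide-act-feedback-adjust cycle described here can be sketched as a toy agent. Everything below is illustrative (two made-up actions with hidden payoff rates, a simple running-average update), not any real training system:

```python
import random

# Toy sketch of the decide -> act -> feedback -> adjust loop.
# The two "actions" and their hidden payoff probabilities are
# illustrative assumptions, not a real system.

def run_loop(steps=2000, seed=0):
    rng = random.Random(seed)
    payoff = {"a": 0.2, "b": 0.8}   # environment's hidden reward rates
    value = {"a": 0.0, "b": 0.0}    # agent's running estimates
    counts = {"a": 0, "b": 0}
    for _ in range(steps):
        # decide: mostly exploit the current best estimate, sometimes explore
        if rng.random() < 0.1:
            action = rng.choice(["a", "b"])
        else:
            action = max(value, key=value.get)
        # act and receive feedback from the environment
        reward = 1.0 if rng.random() < payoff[action] else 0.0
        # adjust: incremental-average update of the estimate (the correction step)
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]
    return value

v = run_loop()
print(v)  # the estimate for "b" should end up near 0.8, well above "a"'s
```

In these terms, current LLM deployment amounts to freezing `value` after training and never running the loop again.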
eleveriven
AI is a tool, and like any tool, it's only as good as how we choose to use it.
bane
I feel sorry for the young hopeful data scientists who got into the field when doing data science was still interesting and 95% of their jobs hadn't turned over into tuning the latest LLM to poorly accomplish some random task an executive thought up.

I know a few of them and once they started riding the hype curve for real, the luster wore off and they're all absolutely miserable in their jobs and trying to look for exits. The fun stuff, the novel DL architectures, coming up with clever ways to balance datasets or label things...it's all just dried up.

It's even worse than the last time I saw people sadly taking the stairs down the other end of the hype cycle when bioinformatics didn't explode into the bioeconomy that had been promised or when blockchain wasn't the revolution in corporate practices that CIOs everywhere had been sold on.

We'll end up with this junk everywhere eventually, and it'll continue to commoditize, and that's why I'm very bearish on companies trying to make LLMs their sole business driver.

AI is a feature, not the product.

shswkna
The elephant in the room is this question:

What do we value? What is our value system made up of?

This is, in my opinion, the Achilles' heel of the current trajectory of the West.

We need to know what we are doing it for. Like the OP said, he is motivated by the human connectedness that art, music and the written word inspire.

On the surface, it seems we value the superficial slickness of LLM-produced content more.

This is a facade, like so many other superficial artifacts of our social life.

Imperfect authenticity will soon (or sometime in the future) become a priceless ideal.

unraveller
If you go back to the earliest months of the audio & visual recording medium it was also called uncanny, soulless and of dubious quality compared to real life. Until it wasn't.

I don't care how many repulsive AI slop video clips get made or promoted for shock value. Today is day 1, and day 2 looks far better, with none of the parasocial celebrity hangups we used as shorthand for a quality marker - something else will take that place.

zone411
The author is in for a rough time in the coming years, I'm afraid. We've barely scratched the surface with AI's integration into everything. None of the major voice assistants even have proper language models yet, and ChatGPT only just introduced more natural, low-latency voices a few days ago. Software development is going to be massively impacted.
pilooch
By AI here is meant generative systems relying on neural networks and semi/self-supervised training algorithms.

It's a reduction of what AI is as a computer science field, and even of what the subfield of generative AI is.

On a positive note, generative AI is a malleable, statistically-grounded technology with a large applicative scope. At the moment the generalist commercial and open models are "consumed" by users, developers, etc. But there's a trove of forthcoming, personalized use cases and ideas to come.

It's just that we are still more in a contemplating phase than a true building phase. As a machine learnist myself, I recently replaced my spam filter with a custom fine-tuned multimodal LLM that reads my emails as pure images. And this is the early, early beginning; imagination and local personalization will emerge.

So I'd say, being tired of it now is missing much later. Keep the good spirit on and think outside the box, relax too :)

richrichardsson
What frustrates me is the bandwagoning, and thus the awful homogeneity in all social media copy these days. It seems everyone is using an LLM to generate their copywriting, and thus 99.999% of products will "elevate" something or other, and there are annoying emojis scattered throughout the text.
nasaeclipse
At some point, I wonder if we will go more analog again. How do we know if a book was written by a human? Simple, he used a typewriter or wrote it by hand!

Photos? Real film.

Video.... real film again lol.

I think that may actually happen at some point.

fulafel
Coming from a testing specialist - the facts are right, but the framing seems negatively biased. For the generalist who wants to get some Playwright tests up, the low-hanging fruit is definitely helped a lot by generative AI. So I emphatically disagree with "there are no shortcuts".
CodeCompost
We're all tired of it, but to ignore it is to be unemployed.
jillesvangurp
I'm actually excited about AI. With a dose of realism. But I benefit from LLMs on a daily basis now. There are a lot of challenges with LLMs, but they are useful tools, and we haven't really seen much yet. It's only been two years since ChatGPT was released. And mostly we're still consuming this stuff via chat UIs, which strikes me as suboptimal and is something I hope will change soon.

The increases in context size are helping a lot. The step improvement in reasoning abilities and quality of answers is amazing to watch. I'm currently using ChatGPT o1-preview a lot for programming stuff. It's not perfect, but I can use a lot of what it generates, and this is saving me a lot of time lately. It still gets stuff wrong, and there's a lot of stuff it doesn't know.

I also am mildly addicted to perplexity.ai. Just a wonderful tool and I seem to be getting in the habit of asking it about anything that pops into my mind. Sometimes it's even work related.

I get that people are annoyed with all the hyperbolic stuff in the media on this topic. But at the same time, the trends here are pretty amazing. I'm running the 3B parameter llama 3.2 model on a freaking laptop now. A nice two year old M1 with only 16GB. It's not going to replace bigger models for me. But I can see a few use cases for running it locally.

My view is very simple. I'm a software developer. I grew up a few decades ago, before there was any internet; I had no clue what a computer even was until I was in high school. Things like Knight Rider, Star Trek, Buck Rogers, Star Wars, etc. all featured forms of AI that are now more or less becoming science fact. C-3PO is pretty dumb compared to ChatGPT, actually. You could build something better and more useful these days, and it would mostly be an arts-and-crafts project at this point. No special skills required: just use an LLM to generate the code you need. Nice project for some high school kids.

Which brings me to my main point. We're the old generation. Part of being old is getting replaced by younger people. Young people are growing up with this stuff. They'll use it to their advantage, and they are not going to be held back by old-fashioned notions about the way things should work according to us old people. Luddites exist in every generation. And then they grow tired and old, and they die off. I have no ambition to become irrelevant like that.

I'm planning to keep up with young people as long as I can. I'll have to give that up at some point but not just yet. And right now that includes being clued in as much as I can about LLMs and all the developer plumbing I need to use them. This stuff is shockingly easy. Just ask your favorite LLM to help you get started.

Janicc
Without any sort of AI, we'd probably be left with the most exciting yearly releases being 3-5% performance increases in hardware (while being 20% more expensive, of course), the 100,000th JavaScript framework, and occasionally a new Windows that everybody hates. People talk about how population collapse is going to mess up society, but I think complete stagnation in new consumer goods and technology is just as likely to do the deed. Maybe AI will fail to improve from this point, but that's a dark future to imagine. Especially if it lasts for the next 50 years.
pech0rin
As an aside, it's really interesting how easily the human brain can read an AI essay and realize it's AI. You would think that with the vast corpus these models were trained on, they would have a more human-sounding voice.

Maybe it's overfitting, or maybe it's just the way the models work under the hood, but any time I see AI-written stuff on Twitter, Reddit, or LinkedIn, it's so obvious it's almost disgusting.

I guess it's just the brain being good at pattern matching, but it's crazy how fast we have adapted to recognize this.
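That pattern matching can even be crudely imitated in code. A toy sketch of the idea (the phrase list, scoring, and example strings are invented for illustration; a real detector is a far harder problem and this one will happily misfire on human text):

```python
# Toy heuristic: score text by the density of phrases that currently
# read as LLM "tells". Purely illustrative, not an actual detector.
TELL_PHRASES = [
    "delve into", "elevate your", "in today's fast-paced world",
    "it's important to note", "game-changer", "unlock the power",
]

def ai_tell_score(text: str) -> float:
    """Return tell-phrase hits per 100 words (0.0 for empty text)."""
    words = text.split()
    if not words:
        return 0.0
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in TELL_PHRASES)
    return 100.0 * hits / len(words)

marketing = "Elevate your workflow! In today's fast-paced world, our tool is a game-changer."
plain = "I wrote some tests yesterday and half of them failed."
print(ai_tell_score(marketing) > ai_tell_score(plain))  # True
```

The interesting part is that the brain appears to do something like this for free, and updates its phrase list much faster than anyone can retrain a classifier.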

shahzaibmushtaq
Over the last few years, AI has become more common than HI (human intelligence) in general use, though not professionally. Professionals know the limits and scope of their work and responsibilities; AI does not.

A few days ago, I visited a portfolio website and immediately realized that its English text was written with the help of AI or some online helper tools.

I love the idea of brainstorming with AI, but copy-pasting anything it throws at you keeps you from adding creativity to the process of making something good.

I believe using AI must complement HI (or IQ level) rather than mock it.

amradio
We can’t compare AI with an expert; there’s going to be little value there. AI is about as capable as your average college grad in any subject.

What makes AI revolutionary is what it does for the novice. They can produce results they normally couldn’t. That’s huge.

A guy with no development experience can produce working non-trivial software. And in a fraction of the time your average junior could.

And this phenomenon happens across domains. All of a sudden the bottom of the skill pool is 10x more productive. That’s major.

resters
AI (LLMs in this case) reduce the value of human conscientiousness, memory, and verbal and quantitative fluency dramatically.

So what's left for humans?

We very likely won't have as many human software testers or software engineers. We'll have even fewer lawyers and other "credentialed" knowledge worker desk jockeys.

Software built by humans entails writing code that has not already been written -- mostly by writing a lot of code that probably has been written before and "linking" it together. When's the last time most of us wrote a truly novel algorithm?

In the AI powered future, software will be built by humans herding AIs to build it. The AIs will do more of the busy work and the humans will guide the process. Then better AIs will be more helpful at guiding the process, etc.

Eventually, the thing that will be rewarded is truly novel ideas and truly innovative thinking.

AIs will make various types of innovative thinking less valuable and various types more valuable, just like any technology before them.

In the past, humans spent most of their brain power trying to obtain their next meal. It's very cynical to think that AI removing busy work will somehow leave humans with nothing meaningful to do, no purpose. Surely it will unlock the best of human potential once we don't have to use our brains to do repetitive and highly pattern-driven tasks just to put food on the table.

When is the last time any of us paid a lawyer to do something truly novel? They dig up boilerplate verbiage, follow standard processes, rinse, repeat, all for $500+ per hour.

Right now we have "manual work" and "knowledge work", broadly speaking, and both emphasize something that is being produced by the worker (a construction project, a strategic analysis, a legal contract, a diagnosis, a codebase, etc.)

With AI, workers will be more responsible for outcomes and less rewarded for simply following a procedure that an LLM can do. We hire architects with visual and spatial design skills rather than asking a contractor to just create a living space with a certain number of square feet. The emphasis in software will be less on the writing of the code and more on the impact of the code.

creesch
I fully agree with this sentiment, also interesting to see Bas Dijkstra being featured on this platform.

Another article that touches on a lot of the issues I have with the place AI currently occupies in the landscape is this excellent article: https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you...

redandblack
Having spent the last decade hearing about trustless trust, we're now faced with a decade of dealing with no trust whatsoever.

We started with don't-trust-the-government, then don't-trust-big-media, then don't-trust-all-media, and eventually arrived at a no-trust society. Lovely.

Really, I'm just waiting for the AI feedback loop to converge on itself. Get this over with soon, please.

sedatk
I remember being awestruck at the first avocado-chair images DALL-E generated. So many possibilities ahead. But we ended up with oversaturated, color-soup, greasy, smooth pictures everywhere because, as it turns out, beauty is in the eye of the prompter.
izwasm
I'm tired of people throwing ChatGPT everywhere they can just to say they use AI, even if it's a useless feature.
semiinfinitely
A software tester tired of AI? Not surprising, given that this is likely one of the first jobs AI will replace.
BodyCulture
I would like to know how AI helps us solve the climate crisis. I have read some articles about weather predictions getting better with the help of AI, but that is just monitoring; I would like to see more actual solutions.

Do you have any recommendations?

Thanks!

warvair
90% of everything is crap. Perhaps AI will make that 99% in the future. OTOH, maybe AI will slowly convert that 90% into 70% crap & 20% okay. As long as more stuff that I find good gets created, regardless of the percentage of crap I have to sift through, I'm down.
visarga
> I’m pretty sure that there are some areas where applying AI might be useful.

How polite: everyone is sure AI might be useful in other fields, just not their own.

> people are scared that AI is going to take their jobs

They can't both be true: that AI isn't really useful, and that AI is taking our jobs.

lvl155
What really gets me about the AI space is that it's going the way of the front-end development space. I also hate the fact that Facebook/Meta is seemingly the only one doing the heavy lifting in public. It's great so far, but I just don't trust them in the end.
Meniceses
I love AI.

In comparison to a lot of other technologies, we actually have jumps in quality left and right, great demos, and new things that are really helpful.

It's fun to watch the AI news because something relevant and new is always happening.

I'm worried about the impact of AI, but this is a billion times better than the last 10 years, which were basically just cryptobros, NFTs, and blockchain stuff that amounted to fraud.

It's not just GenAI stuff: we're talking about blind people getting better help through image analysis, AlphaFold, LLMs being impressive as hell, the research currently happening.

And yes, I already see benefits in my job and in my startup.

whoomp12342
Here is where you are wrong about AI lacking creativity:

AI music is bland and boring, UNLESS YOU KNOW MUSIC REALLY WELL. As a matter of fact, it can SPAWN poorly executed but really interesting ideas with almost no effort.

"What if Kurt Cobain wrote a song about waterfalls in the West that was then sung by Johnny Cash," etc.

That idea is awful, but when you generate it, you might get snippets that could turn into a whole new HUMAN-made song.

The same process is how I foresee AI helping engineering. It's not replacing us; it's inspiring us.

jaakl
It seems Claude (3.5 Sonnet) provided the longest summary of this discussion for me, using a basic single-shot prompt:

After reviewing the Hacker News thread, here are some of the main repeating patterns I observed:

* Fatigue and frustration with AI hype: Many commenters expressed being tired of the constant AI hype and its application to every domain.
* Concerns about AI-generated content quality: There were recurring worries about AI producing low-quality, generic, or "soulless" content across various fields.
* Debate over AI's impact on jobs and creativity: Some argued AI would displace workers, while others felt it was just another tool that wouldn't replace human creativity and expertise.
* Skepticism about AI capabilities: Several commenters felt the current AI systems were overhyped and not as capable as claimed.
* Copyright and ethical concerns: Many raised issues about AI training on copyrighted material without permission or compensation.
* Polarized views on AI's future impact: There was a split between those excited about AI's potential and those worried about its negative effects.
* Comparisons to previous tech hypes: Some likened the AI boom to past technology bubbles like cryptocurrency or blockchain.
* Debate over regulation: Discussion on whether and how AI should be regulated.
* Concerns about AI's environmental impact: Mentions of AI's large carbon footprint.
* Meta-discussion about HN itself: Comments about how the discourse on HN has changed over time, particularly regarding AI.
* Capitalism critique: Some framed issues with AI as symptoms of larger problems with capitalism.
* Calls for embracing vs rejecting AI: A divide between those advocating for adopting AI tools and those preferring to avoid them.

These patterns reflect a community grappling with the rapid advancement and widespread adoption of AI technologies, showcasing a range of perspectives from enthusiasm to deep skepticism.

sovietmudkipz
I am tired and hungry…

The thing I’m tired of is elites stealing everything under the sun to feed these models. So funny that copyright is important when it protects elites, but not when a billion thefts are committed by LLM folks. There's poor incentive for creators to create if their work just gets stolen and replicated by AI.

I’m hungry for more lawsuits. The biggest theft in human history by these gang of thieves should be held to account. I want a waterfall of lawsuits to take back what’s been stolen. It’s in the public’s interest to see this happen.

buddhistdude
Some of the activities we're involved in are not limited in complexity; driving a car, for example. You can have a huge amount of experience driving a car and still face new situations.

The things most knowledge workers work on are limited problems, and it is just a matter of time until the machine reaches that level. Then our employment will end.

Edit: and that doesn't have to be AGI. It just needs to be good enough for the problem.

cedws
Current generative AI is a set of moderately useful/interesting technology that has been artificially blown up into something way bigger.

If you've been paying any attention for the past two decades, you'll have noticed that capitalism has had a series of hype cycles. Post COVID, Western economies are on their knees, productivity is faltering and the numbers aren't checking out anymore. Gen AI is the latest hype cycle, and it has been excellent for generating hype with clueless VCs and extracting money from them, and generating artificial economic activity. I truly think we are in deep shit when this bubble pops, it seems to be the only thing propping up our economies and staving off a wider bear market.

I've heard some say that this is all just the beginning and AGI is 2 years away because... Moore's law, and that somehow applies to LLM benchmarks. Putting aside that this is a completely nonsensical idea, LLM performance is quite clearly not on any kind of exponential curve by now.

brailsafe
I mean, I'm at most fine with occasionally using an LLM for a low-risk, otherwise predictable, small-surface-area, mostly boilerplate set of problems I shouldn't be spending energy on anyway. I'm excited about potentially replacing my (to me) recent-ish 2019 MacBook Pro with an M4, if it's a good value for me. However, I have zero interest in the built-in AI features of any product, and it hasn't even crossed my mind why I would. The novelty wore off last year, and its presence in my OS is going to be at most incidental to the efficiency advantages of the hardware; at worst, it'll annoy the hell out of me and I'll look for ways to permanently disable any first-party integration. I haven't even paid attention to the news around what's coming in the latest macOS, but I'm hoping it'll be ignorable, like the features that exist for iPhone users are.
danjl
One of the pernicious aspects of using AI is the feeling it gives you that you have done all the work without any of the effort. But the time it takes to digest and summarize an article as a human requires deep ingestion of the concepts; the process is what helps you understand. The AI summary might be better, and it didn't take any time, but you don't understand any of it because you didn't do the work. It's similar to the effect of telling people you will do a task, which gives your brain the same endorphins as actually doing the task, resulting in a lower chance that the task ever gets done.
sirsinsalot
If humans have a talent for anything, it is mechanising the pollution of the things we need most.

The earth. Information. Culture. Knowledge.

chalcolithic
In Soviet planet Earth AI gets tired of you. That's what I expect future to be like, anyways
mrmansano
I love AI, I use it every single day, and I wouldn't consider myself a luddite, but... oh, boy... I hate the people who are too bullish on it. Not the people working to make AI happen (although my __suspicious people radar__ says __run__ every single time I see Sam Altman's face anywhere), but the people who hype it into the ground, the "e/acc" people. I feel like the crypto-bros just moved from the "almighty decentralized coin god" hype to the "almighty tech god that for sure will be available soon". It looks like a cult or religion is forming around the singularity: if I hype it now, it will be generous to me when it takes control. Oh, and if you don't hype it, then you're a neo-luddite/doomer and I will look upon you with disdain, as you are a mere peasant.

Also, the get-rich-quick schemes forming around the idea that anyone can have a "1-person, 1-billion-dollar" company with just AI, not realizing that when anyone can replicate your product, it won't have any value anymore: "ChatGPT just made me this website to help classify whether an image is a hot dog or not! I'll be rich selling it to Nathan's - oh, what's that? Nathan's just asked ChatGPT to create a hot-dog classifier for them?!"

Not that the other vocal side is any better: "AI is useless", "It's not true intelligence", "AI will kill us all", "AI will make everyone unemployed in 6 months!"... But the AI tech-bro side can be more annoying in my personal experience (I'm sure the opposite is true for others). All those people are tiring, and they're making AI tiring for some too... But the tech is fun and will keep evolving and staying present, whether we are tired of it or not.
kvnnews
I’m not the only one! Fuck ai, fuck your algorithm. It sucks.
cutler
Me too :)
the_clarence
I think it's awesome personally
ninetyninenine
This guy doesn’t get it. The technology is quickly converging on a point where no one can recognize whether it was written by AI or not.

The technology is on a trend line where the output of these LLMs can be superior to most human writing.

Being of tired of this is the wrong reaction. Being somewhat fearful and in awe is the correct reaction.

You can thank social media constantly hammering us with headlines for why so many people are “over it”. We are getting used to it, but make no mistake: being “over it” is an illusion. LLMs represent a milestone in human technological achievement, and being “over it”, or claiming LLMs can never reason and their output is just a copy, is delusional.

kaonwarb
> It has gotten so bad that I, for one, immediately reject a proposal when it is clear that it was written by or with the help of AI, no matter how interesting the topic is or how good of a talk you will be able to deliver in person.

I am sympathetic to the sentiment, and yet worry about someone making impactful decisions based on their own perception of whether AI was used. Such perceptions have been demonstrated many times recently to be highly faulty.

hcks
Hacker News, when we may be on the path to actual AI: "meh, I hate this; you know what's actually really interesting? Manually writing tests for software."
AlienRobot
I'm tired of technology.

I don't think there has ever been a single tech news that brought me joy in all my life. First I learned how to use computers, and then it has been downhill ever since.

Right now my greatest joy is in finding things that STILL exist rather than new things, because the things that still exist are generally better than anything new.

habosa
I refuse to work on AI products. I refuse to use AI in my work.

It’s inescapable that I will work _near_ AI given that I’m a SWE and I want to get paid, but at least by not actively advancing this bullshit I’ll have a tiny little “wasn’t me” I can pull out when the world ends.

ETH_start
That's fine, he can stick with his horse and buggy. Cognition is undergoing its transition to automobiles.
amiantos
I'm tired of people complaining about AI stuff, let's move on already. But based on the votes and engagement on this post, complaining about AI is still a hot ticket to clicks and attention, even if people are just regurgitating the exact same boring takes that are almost always in conflict with each other: "AI sure is terrible, isn't it? It can't do anything right. It sucks! It's _so bad_. But, also, I am terrified AI is going to take my job away and ruin my way of life, because AI is _so good_."

Make up your mind, people. It reminds me of anti-Apple people who say things like "Apple makes terrible products and people only buy them because... because... _they're brainwashed!_" Okay, so we're supposed to believe two contradictory points at once: Apple products are very very bad, but also people love them very much. In order to believe those contradictory points, we must just make up something to square them, so in the case of Apple it's "sheeple!" and in the case of AI it's... "capitalism!" or something? AI is terrible but everyone wants it because of money...? I don't know.

andai
daniel_k 53 minutes ago | next [-]

I agree with the sentiment, especially when it comes to creativity. AI tools are great for boosting productivity in certain areas, but we’ve started relying too much on them for everything. Just because we can automate something doesn’t mean we should. It’s frustrating to see how much mediocrity gets churned out in the name of ‘efficiency.’

testers_unite 23 minutes ago | next [-]

As a fellow QA person, I feel your pain. I’ve seen these so-called AI test tools that promise the moon but deliver spaghetti code. At the end of the day, AI can’t replicate intuition or deep knowledge. It’s just another tool in the toolbox—useful in some contexts but certainly not a replacement for expertise.

nlp_dev 2 hours ago | next [-]

As someone who works in NLP, I think the biggest misconception is that AI is this magical tool that will solve all problems. The reality is, it’s just math. Fancy math, sure, but without proper data, it’s useless. I’ve lost count of how many times I’ve had to explain this to business stakeholders.

-HN comments for TFA, courtesy of ChatGPT

tonymet
Assuming that people tend to pursue the expedient and convenient solution, AI will degrade science and art until only a facsimile of outdated creativity is left.
scotty79
AI has so far been trained to generate corporate BS speak in a corporate BS format. That's why it's tiring. More unique touches in communication will come later, as fine-tunings and LoRAs (if possible) of those models are shared.
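For anyone wondering what such a fine-tuning would actually change: a LoRA leaves the pretrained weight matrix frozen and learns only a small low-rank update alongside it. A minimal numpy sketch of the idea (the dimensions, rank, and scaling here are chosen purely for illustration, not taken from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 16, 2   # layer dims vs. tiny adapter rank r
alpha = 4.0                 # LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = rng.normal(size=(r, d_in))       # trainable rank-r "down" projection
B = np.zeros((d_out, r))             # trainable "up" projection, zero-init

def forward(x):
    # y = W x + (alpha / r) * B A x: the low-rank update rides
    # alongside the frozen weight instead of replacing it.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zero-initialized, the adapted layer starts out identical
# to the frozen base layer:
print(np.allclose(forward(x), W @ x))  # True

# Only A and B are trained: r*(d_in + d_out) = 48 parameters here,
# instead of d_out*d_in = 128 for a full fine-tune of this layer.
```

Because only the small A and B matrices change, many different "personalities" can be shared and swapped on top of one frozen base model, which is exactly what would allow the less corporate tones this comment hopes for.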
yapyap
Same.
farts_mckensy
I am tired of people saying, "I am tired of AI."
fallingknife
I'm not. I think it's awesome and I can't wait to see what comes out next. And I'm completely OK with all of my work being used to train models. Bunch of luddites and sour grapes around here on HN these days.
paulcole
> AI’s carbon footprint is reaching more alarming levels every day

It really really really really isn’t.

I love how people use this argument for anything they don’t like – crypto, Taylor Swift, AI, etc.

Everybody in the developed world’s carbon footprint is disgusting! Even yours. Even mine. Yes, somebody else is worse than me and somebody else is worse than you, but we’re all still awful.

So calling out somebody else’s carbon footprint is the most eye-rolling “argument” I can imagine.

littlestymaar
It's not AI you hate, it's Capitalism.
AI_beffr
In 2018 we had the first GPT, which would babble and repeat itself but would string together words that were oddly coherent. People dismissed any talk of these models having larger potential. And here we are today, with the state of AI being what it is, and people are still, in essence, denying that AI could become more capable or intelligent than it is right at this moment. After so many years of this zombie argument having its head chopped off and then regrowing, I can only think that it is people's deep-seated humanity that prevents them from seeing the obvious. It would be such a deeply disgusting and alarming development if AI were to spring to life that most people, being good people, are literally incapable of believing that it's possible. It's their own mind, their human sensibilities, protecting them. That's OK. But it would help keep humanity safe if more people could realize that there is nothing stopping AI from crossing that threshold, and every heuristic points to the fact that we are on the cusp of it.
tananan
On point article, and I'm sure it represents a common sentiment, even if it's an undercurrent to the hype machine ideology.

It is quite hard to find a place which works on AI solutions where a sincere, sober gaze would find anything resembling the benefits promised to users and society more broadly.

On the "top level" the underlying hope is that a paradigm shift for the good will happen in society, if we only let collective greed churn for X more years. It's like watering weeds hoping that one day you'll wake up in a beautiful flower garden.

On the "low level", the pitch is more sincere: we'll boost process X, optimize process Y, shave off %s of your expenses (while we all wait for the flower garden to appear). "AI" is latching on like a parasitic vine on existing, often parasitic workflows.

The incentives are often quite pragmatic, coated in whatever lofty story one ends up telling themselves (nowadays, you can just outsource it anyway).

It's not all that bleak, I do think there's space for good to be done, and the world is still a place one can do well for oneself and others (even using AI, why not). We should cherish that.

But one really ought to not worry about disregarding the sales pitch. It's fine to think the popular world is crazy, and who cares if you are a luddite in "their" eyes. And imo, we should avoid the two delusional extremes: 1. The flower garden extreme 2. The AI doomer extreme

In a way, both of these are similar in that they demote personal and collective agency from the throne, and enthrone an impersonal "force of progress". And they restrict one's attention to this supposedly innate teleology in technological development, to the detriment of the actual conditions we are in and how we deal with them. It's either a delusional intoxication or a way of coping: since things are already set in motion, all I can do is do... whatever, I guess.

I'm not sure how far one can take AI in principle, but I really don't think whatever power it could have will be able to come to expression in the world we live in, in the way people think of it. We have people out there actively planning war, thinking they are doing good. The well-off countries are facing housing, immigration and general welfare problems. To speak nothing of the climate.

Before the outbreak of WWI, we had invented the Haber-Bosch process, which greatly improved our food production capabilities. A few years later, WWI broke out, and the same person who worked on fertilizers ended up working on chemical warfare development.

Assuming that "AI" can somehow work outside of the societal context it exists in, causing significant phase shifts, is like being in 1910, thinking all wars will be ended because we will have gotten that much more efficient at producing food. There will be enough for everyone! This is especially ironic when the output of AI systems has been far more abstract and ephemeral.

benignslime
[dead]
varelse
[dead]
Romanulus
[dead]
shaunxcode
LLM/DEEP-MIND is DESTROYING lineage. This is the crux point we can all feel. Up until now you could pick up a novel, watch a film, or download an open source library and figure out the LINEAGE (even if no attribution is directly made, by studying the author, etc.).

I am not too worried, though. People are starting to realize this more and more. Soon using AI will be the next Google Glass. "LLM" is already a slur worse than "NPC" among the youth. And profs are realizing it's time for a return to oral exams ONLY as an assessment method. (We figured this out in industry ages ago: whiteboard interviews, etc.)

Yours truly : LILA <an LISP INTELLIGENCE LANGUAGE AGENT>

DiscourseFan
The underlying technology is good.

But what the fuck. LLMs, these weird, surrealistic art-generating programs like DALL-E, they're remarkable. Don't tell me they're not, we created machines that are able to tap directly into the collective unconscious. That is a serious advance in our productive capabilities.

Or at least, it could be.

It could be if it was unleashed, if these crummy corporations didn't force it to be as polite and boring as possible, if we actually let the machines run loose and produce material that scared us, that truly pulled us into a reality far beyond our wildest dreams--or nightmares. No, no we get a world full of pussy VCs, pussy nerdy fucking dweebs who got bullied in school and seek revenge by profiteering off of ennui, and the pussies who sit around and let them get away with it. You! All of you! sitting there, whining! Go on, keep whining, keep commenting, I'm sure that is going to change things!

There's one solution to this problem and you know it as well as I do. Stop complaining and go "pull yourself up by your bootstraps." We must all come together to help ourselves.