The responses from large language models can resemble an information smoothie: it goes down easy but contains mysterious ingredients. As one observer put it, “the ability to generate an answer has fundamentally shifted,” and in a ChatGPT answer there is “literally no citation, and no grounding in the literature as to where that information came from.”
So, what can the corporate world count on AI to do? Simple: it can sweat the small stuff. Keep it in the cage of a small app with defined parameters, and AI can be the tool for the tasks you really don't want to do. Mundane chores, data crunching, and minor training can be automated in ways that save only a few employee-hours every week, but those savings add up to big efficiencies.
These small apps are augmentations, not replacements. You're unlikely to lay off your HR department just because AI can help them create training modules; there are plenty of other good business and legal reasons to have humans run your Human Resources.
One thing that is becoming clearer at the managerial level: the full-on jobs apocalypse we feared isn't happening just yet, in large part because of the need to stand by the machine and check its output. Platforms are becoming more realistic about the need to monitor and tweak AI apps to prevent self-inflicted damage.
So much of human language and communication is based on a shared understanding of the world. It’s very difficult to fake it with a computer.
Somewhere along the way, however, the tech industry tipped over from helpfully automating the jobs that slowed down our lives to distorting society by surrendering crucial decisions to computers.
There's no reason to be scared of AI making decisions for you in the future — computers have already been doing so for quite some time.
Artificial intelligence isn't the future — it's just a marketing term for a slightly updated version of the automation that has been ruling our lives for years.
Much like the proverbial frog boiling in a pot of water, the slow takeover of algorithms has mostly gone unnoticed by the general public.
Revenue-focused algorithms behind networks like Facebook, Instagram, TikTok, and Twitter have learned how to feed users a steady stream of upsetting or enraging content to goose user engagement.
Without human data to train on, your language model becomes completely oblivious to what you ask it to solve, and it starts just talking in circles about whatever it wants, as if it had gone into this madman mode.
These models are built to generate text that sounds like what a person would say — that’s the key thing. So they’re definitely not built to be truthful.
To date, building programs that beat humans at checkers and chess has meant creating a series of idiots savants. Each feat has been a massive software and/or hardware project, requiring many person-years of effort. Clearly, this type of progress is not scalable. What’s more, games like chess represent a tiny subset of the problems that humans tackle. The rules are set and do not change. The board is small. There is no chance or hidden information. The outcome is zero-sum. In the real world, none of that applies.
ChatGPT has become infamous for generating fictional data points or false citations known as “hallucinations”; perhaps more insidious is the tendency of bots to oversimplify complex issues...
...several A.I. researchers collaborated on a paper that examined whether new A.I. systems could be developed from knowledge generated by existing A.I. models, rather than by human-generated databases. They discovered a systemic breakdown — a failure they called “model collapse.” The authors saw that using data from an A.I. to train new versions of A.I.s leads to chaos. Synthetic data, they wrote, ends up “polluting the training set of the next generation of models; being trained on polluted data, they then misperceive reality.”
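One mechanism behind the “polluted training set” effect can be sketched numerically. The toy simulation below is an illustrative assumption, not the paper’s actual setup: it fits a simple unigram “model” to a batch of tokens, samples fresh “training data” from that model, and refits, generation after generation. A rare token that fails to appear in one generation’s samples can never be sampled again, so the model’s vocabulary can only shrink as the tail of the distribution is lost.

```python
import random
from collections import Counter

def train(samples, vocab):
    """'Train' a unigram model: estimate token frequencies from samples."""
    counts = Counter(samples)
    total = len(samples)
    return {tok: counts[tok] / total for tok in vocab if counts[tok] > 0}

def generate(model, n, rng):
    """Sample n tokens from the fitted unigram model."""
    toks = list(model)
    weights = [model[t] for t in toks]
    return rng.choices(toks, weights=weights, k=n)

rng = random.Random(0)
vocab = list(range(20))
# Zipf-like "human" data: token i has weight 1/(i+1), so a long tail of rare tokens.
data = rng.choices(vocab, weights=[1 / (i + 1) for i in vocab], k=200)

support_sizes = []
for generation in range(50):
    model = train(data, vocab)        # fit on the previous generation's output
    support_sizes.append(len(model))  # how many tokens the model still "knows"
    data = generate(model, 200, rng)  # its output becomes the next training set

print("vocabulary support per generation:", support_sizes)
```

Because a vanished token is never resampled, the support sizes are monotonically non-increasing; over enough generations the tail disappears and the model converges toward its most common tokens, a crude analogue of misperceiving reality after training on synthetic data.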
One worry about generative A.I. at Wikipedia — whose articles on medical diagnoses and treatments are heavily visited — is related to health information. A summary of the March conference call captures the issue: “We’re putting people’s lives in the hands of this technology — e.g. people might ask this technology for medical advice, it may be wrong and people will die.”
Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values.
One loom operator as opposed to 100 weavers.
In such moments, it can feel like the AI-infused overemployed community is taking advantage of a brief moment in time, when the tools that can be used to automate a job are much better understood by the workforce than by the bosses with hiring and firing power. One person, who works multiple jobs in information technology, spoke openly about the tension that creates: AI makes it easier to hold down multiple jobs today; but should the bosses realize just how much of that work machines can handle, those jobs could be automated away altogether. As a result, he said, there’s good reason to keep quiet about what they’ve discovered.
Most of the overemployed workers themselves maintain that their jobs require a baseline level of expertise, even with ChatGPT. Still, some members of the overemployed community feel they have peered into the future, and not liked everything they’ve seen.
Kiosks are going to replace low-wage workers. AI is going to have a voice in this.
Bottom line: Robots do replace workers. On the other hand, some industries that don't automate end up losing workers anyway, because their costs are too high and their customers go elsewhere. For workers, robots are only part of the problem.
The state of the art until now has just been a laissez-faire data approach. You just throw everything in, and you’re operating with a mind-set where the more data you have, the more accurate your system will be, as opposed to the higher quality of data you have, the more accurate your system will be.
We tend to think of artificial intelligence as something that’s about the tech. At the heart of it, artificial intelligence research is about humanity. It’s about understanding ourselves well enough to mimic some of the things we can do.
We've seen examples before of how AI is only as good as the data that it learns from.
In the future, A.I. systems might interpret whether a query requires a rigorous factual answer or something more creative. In other words, if you wanted an analytical report with citations and detailed attributions, the A.I. would know to deliver that.
Elon Musk has formed a firm called Neuralink; he thinks that, if humanity is to survive the advent of artificial intelligence, it needs an upgrade.
What we found pretty quickly with the AI companies is that not only was it not an exchange of value, we’re getting nothing in return. Literally zero. AI companies have leeched value from writers in order to spam Internet readers.