When you're talking about nation states, or people interfering, a lot of that stuff is best rooted out at the level of, kind of, accounts doing phony things. So whether it's China, or Russia, or Iran, or one of these countries, they'll set up these networks of fake accounts and bots, and they coordinate and post on each other's stuff to make it seem like it's authentic, and kind of convince people like, wow, a bunch of people must think this. And the way that you identify that is you build AI systems that can basically detect that those accounts are not behaving the way a human would.
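A minimal sketch of the kind of behavioral signal such detection systems might rely on: flagging pairs of accounts whose engagement histories overlap almost completely. The data layout, threshold, and account names here are illustrative assumptions, not how any real platform works.

```python
from itertools import combinations

def flag_coordinated_accounts(interactions, threshold=0.8):
    """Flag account pairs whose engagement targets overlap suspiciously.

    interactions: dict mapping account id -> set of post ids the account
    liked or commented on. Real systems use far richer behavioral
    features; this Jaccard-overlap heuristic is purely illustrative.
    """
    suspicious = []
    for a, b in combinations(interactions, 2):
        overlap = interactions[a] & interactions[b]
        union = interactions[a] | interactions[b]
        if union and len(overlap) / len(union) >= threshold:
            suspicious.append((a, b))
    return suspicious

accounts = {
    "acct_1": {"p1", "p2", "p3", "p4"},
    "acct_2": {"p1", "p2", "p3", "p4"},   # near-identical engagement
    "acct_3": {"p7", "p9"},
}
print(flag_coordinated_accounts(accounts))  # [('acct_1', 'acct_2')]
```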
The AI revolution is progressing at lightning speed. It took centuries for humanity to adapt to the agricultural revolution. It took decades to adapt to the industrial revolution. We may have but a few years to adapt to the AI revolution.
You'll hear people saying things like "They're just doing autocomplete. They're just trying to predict the next word. And they're just using statistics." Well, it's true they're just trying to predict the next word, but if you think about it, to predict the next word you have to understand the sentences. So the idea "they [AI] just predict the next word so they're not intelligent" is crazy. You have to be really intelligent to predict the next word really accurately.
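To see why "just predicting the next word" understates the task, here is the crudest possible next-word predictor, a bigram frequency table (my toy illustration, not Hinton's). It can only parrot whatever continuation it saw most often; everything a modern LLM does beyond this lookup is what the "just autocomplete" framing leaves out.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which: the crudest next-word predictor."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen during training."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram(["the cat sat on the mat", "the cat ran away"])
print(predict_next(model, "cat"))  # 'sat'
```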
The responses from large language models can resemble an information smoothie that goes down easy but contains mysterious ingredients. “The ability to generate an answer has fundamentally shifted”: in a ChatGPT answer there is “literally no citation, and no grounding in the literature as to where that information came from.”
So, what can the corporate world count on AI to do? Simple: it can sweat the small stuff. Keep it in the cage of a small app with defined parameters, and AI can really be the tool for the tasks you'd rather not do yourself. Mundane tasks, data crunching, and minor training can be automated in ways that may save only a few employee-hours every week, but that adds up to big efficiencies.
One thing that is becoming clearer at the managerial level: the full-on jobs apocalypse we feared isn't happening just yet, in large part because of the need to stand by the machine and check its output. Platforms are becoming more realistic about the need to monitor and tweak AI apps to guard against self-inflicted wounds.
These small apps are augmentations, not replacements. You're unlikely to lay off your HR department just because AI can help them create training modules; there are plenty of good business and legal reasons to have humans run your Human Resources.
AI also has the potential to obscure our true identity as sons and daughters of a loving Heavenly Father, distract us from the eternal truths and righteous work necessary for spiritual growth, engender pride and a diminished acknowledgment of our dependence upon God, and distort or replace meaningful human interaction.
“This personalization creates a sense of connection and understanding, making interactions with these virtual companions highly appealing. The allure is further heightened by their 24/7 availability and the absence of the complexities often found in [authentic] human relationships. From remembering important dates to responding in a consistently understanding manner, these AI [companions] are programmed to fulfill idealized companionship roles, making them [especially] addictive” and distorting perceptions of “things as they really are” in human relationships.
Furthermore, virtual companions specifically designed to appeal to and evolve with a person’s emotional needs may wreak havoc in previously safe relationships. Like carbon monoxide, such virtual relationships may become the “invisible killer” of real relationships. Counterfeit emotional intimacy may displace real-life emotional intimacy—the very thing which binds two people together. A person may find comfort and solace in a virtual companion in a way that erodes mutual dependence between a husband and a wife. And some individuals may fall into this trap without realizing it is a violation of the exclusive commitment to a spouse because a virtual companion is not “real” and does not count as another person.
Always remember that an AI companion is only a mathematical algorithm. It does not like you. It does not care about you. It does not really know if you exist or not. To repeat, it is a set of computer equations that will treat you as an object to be acted upon, if you let it. Please, do not let this technology entice you to become an object.
Truth is knowledge of things as they really are. Artificial intelligence cannot simulate, imitate, or replace the influence of the Holy Ghost in our lives. No matter how sophisticated and elegant AI technology ultimately may become, it simply can never bear witness of the Father and the Son, reveal the truth of all things, or sanctify those who have repented and been baptized.
One of my great concerns is that overreliance on AI technology will cause us to become spiritually slothful and shallow—and to forfeit the blessings made possible through righteous work.
My beloved brothers and sisters, please always remember—we should not sell our spiritual birthright of “know[ing] the joys and glories of creation” for a mess of technological “pottage.”
Always remember that becoming a devoted disciple requires focused, sustained, and righteous work. So does personal revelation. We must strive to become agents who exercise faith in the Savior and act, and shun becoming objects that merely are acted upon.
AI feels to many like all take and no give. But there’s now so much money in AI, and the technological state of the art is changing so fast that many site owners can’t keep up. And the fundamental agreement behind robots.txt, and the web as a whole — which for so long amounted to “everybody just be cool” — may not be able to keep up either.
Koster cautioned against arguing about whether robots are good or bad — because it doesn’t matter, they’re here and not going away.
Amazon’s crawlers traipse the web looking for product information, and according to a recent antitrust suit, the company uses that information to punish sellers who offer better deals away from Amazon.
AI companies like OpenAI are crawling the web in order to train large language models that could once again fundamentally change the way we access and share information.
The ability to download, store, organize, and query the modern internet gives any company or developer something like the world’s accumulated knowledge to work with. In the last year or so, the rise of AI products like ChatGPT, and the large language models underlying them, has made high-quality training data one of the internet’s most valuable commodities. That has caused internet providers of all sorts to reconsider the value of the data on their servers, and rethink who gets access to what. Being too permissive can bleed your website of all its value; being too restrictive can make you invisible. And you have to keep making that choice with new companies, new partners, and new stakes all the time.
There are a few breeds of internet robot. You might build a totally innocent one to crawl around and make sure all your on-page links still lead to other live pages; you might send a much sketchier one around the web harvesting every email address or phone number you can find. But the most common one, and the most currently controversial, is a simple web crawler. Its job is to find, and download, as much of the internet as it possibly can.
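The "totally innocent" variety can be a few dozen lines of standard-library Python. This sketch checks a single page for dead links, with a hypothetical starting URL; a real crawler would add politeness delays, robots.txt checks, and recursion across pages.

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    """Fetch a page and report which of its links still resolve."""
    with urllib.request.urlopen(page_url) as resp:
        parser = LinkExtractor()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    for href in parser.links:
        target = urljoin(page_url, href)
        try:
            status = urllib.request.urlopen(target, timeout=5).status
        except Exception as exc:
            status = exc  # dead link, timeout, or refused connection
        print(target, "->", status)

# check_links("https://example.com")  # hypothetical starting page
```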
In the last year or so, though, the rise of AI has upended that equation. For many publishers and platforms, having their data crawled for training data felt less like trading and more like stealing.
BBC director of nations Rhodri Talfan Davies announced last fall that the BBC would also be blocking OpenAI’s crawler. The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI’s models “were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more.” A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file.
There are also crawlers used for both web search and AI. CCBot, which is run by the organization Common Crawl, scours the web for search engine purposes, but its data is also used by OpenAI, Google, and others to train their models. Microsoft’s Bingbot is both a search crawler and an AI crawler. And those are just the crawlers that identify themselves — many others attempt to operate in relative secrecy, making it hard to stop or even find them in a sea of other web traffic. For any sufficiently popular website, finding a sneaky crawler is needle-in-haystack stuff.
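Self-identifying crawlers, at least, can be tallied from an ordinary access log by their User-Agent strings (GPTBot, CCBot, and Bingbot all announce themselves). A rough sketch, assuming combined-format log lines; the stealthy crawlers the passage describes would, by definition, slip past this.

```python
import re
from collections import Counter

# Crawlers that announce themselves carry these tokens in the User-Agent
# header; stealthy ones, by definition, won't be caught this way.
KNOWN_AI_AGENTS = ("GPTBot", "CCBot", "bingbot")

# The User-Agent is the last quoted field of a combined-format log line.
UA_FIELD = re.compile(r'"[^"]*" "(?P<ua>[^"]*)"$')

def count_ai_crawler_hits(log_lines):
    """Tally requests from self-identifying AI crawlers in an access log."""
    hits = Counter()
    for line in log_lines:
        match = UA_FIELD.search(line)
        if not match:
            continue
        ua = match.group("ua").lower()
        for agent in KNOWN_AI_AGENTS:
            if agent.lower() in ua:
                hits[agent] += 1
    return hits

sample = ['1.2.3.4 - - [01/Jan/2024] "GET /post HTTP/1.1" 200 512 "-" "GPTBot/1.0"']
print(count_ai_crawler_hits(sample))  # Counter({'GPTBot': 1})
```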
In large part, GPTBot has become the main villain of robots.txt because OpenAI allowed it to happen. The company published and promoted a page about how to block GPTBot and built its crawler to loudly identify itself every time it approaches a website. Of course, it did all of this after training the underlying models that have made it so powerful, and only once it became an important part of the tech ecosystem.
But robots.txt is not a legal document — and 30 years after its creation, it still relies on the good will of all parties involved. Disallowing a bot on your robots.txt page is like putting up a “No Girls Allowed” sign on your treehouse — it sends a message, but it’s not going to stand up in court. Any crawler that wants to ignore robots.txt can simply do so, with little fear of repercussions.
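The sign on the treehouse is literally two lines. Python's standard library can parse it, but, as the passage notes, only a bot that chooses to ask is bound by the answer. A minimal sketch using the GPTBot block OpenAI documents:

```python
from urllib.robotparser import RobotFileParser

# The GPTBot block many publishers added reads, in full:
#   User-agent: GPTBot
#   Disallow: /
rules = RobotFileParser()
rules.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

# A polite crawler asks before fetching; an impolite one never calls this.
print(rules.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rules.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```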
As the AI companies continue to multiply, and their crawlers grow more unscrupulous, anyone wanting to sit out or wait out the AI takeover has to take on an endless game of whac-a-mole. They have to stop each robot and crawler individually, if that’s even possible, while also reckoning with the side effects. If AI is in fact the future of search, as Google and others have predicted, blocking AI crawlers could be a short-term win but a long-term disaster.
Even as AI companies face regulatory and legal questions over how they build and train their models, those models continue to improve and new companies seem to start every day.
So much of human language and communication is based on a shared understanding of the world. It’s very difficult to fake it with a computer.
Artificial intelligence isn't the future — it's just a marketing term for a slightly updated version of the automation that has been ruling our lives for years.
There's no reason to be scared of AI making decisions for you in the future — computers have already been doing so for quite some time.
Somewhere along the way, however, the tech industry tipped over from helpfully automating the jobs that slowed down our lives to distorting society by surrendering crucial decisions to computers.
Revenue-focused algorithms behind networks like Facebook, Instagram, TikTok, and Twitter have learned how to feed users a steady stream of upsetting or enraging content to goose user engagement.
Much like the proverbial frog boiling in a pot of water, the slow takeover of algorithms has mostly gone unnoticed by the general public.
In the future we'll be wise to avoid relying on observation alone and should seek corroborating evidence from other reliable sources.
AI is much more advanced than people realize. ... Humanity's position on this planet depends on its intelligence so if our intelligence is exceeded, it's unlikely that we will remain in charge of the planet.
We're not paying attention. We worry more about what name somebody called someone else, than whether AI will destroy humanity. That's insane. We're like children in a playground. ... The way in which a regulation is put in place is slow and linear. If you have a linear response to an exponential threat, it's quite likely the exponential threat will win. That, in a nutshell, is the issue.
We will have something that is, for the first time, smarter than the smartest human. It's hard to say exactly when that moment is, but there will come a point where no job is needed. You can have a job if you want to have a job for personal satisfaction, but the AI will be able to do everything. I don't know if that makes people comfortable or uncomfortable. It's as if you wish for a magic genie that gives you any wish you want, with no limit; you don't have that three-wishes nonsense. It's both good and bad. One of the challenges in the future will be how do we find meaning in life.
Without human data to train on, your language model starts being completely oblivious to what you ask it to solve, and it starts just talking in circles about whatever it wants, as if it went into this madman mode.
I think the internet is fundamentally a social creature, and this handshake that has persisted over many decades seems to have worked. OpenAI’s role in keeping that agreement includes keeping ChatGPT free for most users — thus delivering that value back — and respecting the rules of the robots.
These models are built to generate text that sounds like what a person would say — that’s the key thing. So they’re definitely not built to be truthful.
To date, building programs that beat humans at checkers and chess has meant creating a series of idiots savants. Each feat has been a massive software and/or hardware project, requiring many person-years of effort. Clearly, this type of progress is not scalable. What’s more, games like chess represent a tiny subset of the problems that humans tackle. The rules are set and do not change. The board is small. There is no chance or hidden information. The game is zero-sum. In the real world, none of that applies.
ChatGPT has become infamous for generating fictional data points or false citations known as “hallucinations”; perhaps more insidious is the tendency of bots to oversimplify complex issues...
One worry about generative A.I. at Wikipedia — whose articles on medical diagnoses and treatments are heavily visited — is related to health information. A summary of the March conference call captures the issue: “We’re putting people’s lives in the hands of this technology — e.g. people might ask this technology for medical advice, it may be wrong and people will die.”
...several A.I. researchers collaborated on a paper that examined whether new A.I. systems could be developed from knowledge generated by existing A.I. models, rather than by human-generated databases. They discovered a systemic breakdown — a failure they called “model collapse.” The authors saw that using data from an A.I. to train new versions of A.I.s leads to chaos. Synthetic data, they wrote, ends up “polluting the training set of the next generation of models; being trained on polluted data, they then misperceive reality.”
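A toy illustration of the dynamic (my simplification, not the paper's actual experiment): repeatedly re-estimate a word distribution from samples drawn out of the previous generation's estimate. Rare words that go unsampled even once vanish permanently, so the tail of the original distribution erodes generation by generation.

```python
import random
from collections import Counter

def refit(dist, n_samples=30):
    """Sample from the current model and re-estimate token frequencies."""
    tokens, weights = zip(*dist.items())
    draws = random.choices(tokens, weights=weights, k=n_samples)
    counts = Counter(draws)
    return {t: counts[t] / n_samples for t in counts}

# "Human" distribution: common words plus a small tail of rare ones.
dist = {"the": 0.4, "cat": 0.3, "sat": 0.2, "quokka": 0.05, "axolotl": 0.05}
for generation in range(30):
    dist = refit(dist)

# Rare tokens usually disappear: once unsampled, gone forever.
print(sorted(dist))
```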
AI is not going to replace you. You’re going to be replaced by someone who uses AI to outperform you.
"Computers can never replace us....They need facts, information....They need data. But sometimes men can make connections across gaps, without data." ~ Joseph Bane
Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values.
We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.
Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.
People are worried about what these new LLM-powered results will mean for our fundamental shared reality. It could spell the end of the canonical answer.
I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that. But now we are able to generate and compose with that.
If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.
But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?
OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past.
The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.
The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.
This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.
But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.
In such moments, it can feel like the AI-infused overemployed community is taking advantage of a brief moment in time, when the tools that can be used to automate a job are much better understood by the workforce than by the bosses with hiring and firing ability. One person, who works multiple jobs in information technology, spoke openly about the tension that creates: people can more easily hold down multiple jobs today, but should the bosses realize just how much of their jobs can be handled by robots, they could be at risk of those jobs being automated away. As a result, he said, there’s good reason to keep quiet about what they’ve discovered.
Most of the overemployed workers themselves maintain that their jobs require a baseline level of expertise, even with ChatGPT. Still, some members of the overemployed community feel they have peered into the future, and not liked everything they’ve seen.
One loom operator as opposed to 100 weavers.
Kiosks are going to replace low-wage workers. AI is going to have a voice in this.
Bottom line: Robots do replace workers. On the other hand, some industries that don't automate end up losing workers anyway, because their costs are too high and their customers go elsewhere. For workers, robots are only part of the problem.
The state of the art until now has just been a laissez-faire data approach. You just throw everything in, and you’re operating with a mind-set where the more data you have, the more accurate your system will be, as opposed to the higher quality of data you have, the more accurate your system will be.
We tend to think of artificial intelligence as something that’s about the tech. At the heart of it, artificial intelligence research is about humanity. It’s about understanding ourselves well enough to mimic some of the things we can do.
But lawmakers in Washington, state capitals and city halls have been slow to figure out how to protect people’s privacy and guard against echoing the human biases baked into much of the data AIs are trained on.
“Sacred Scripture attests that God bestowed his Spirit upon human beings so that they might have ‘skill and understanding and knowledge in every craft’ (Ex 35:31)”.[1] Science and technology are therefore brilliant products of the creative potential of human beings.[2] Indeed, artificial intelligence arises precisely from the use of this God-given creative potential.
In this regard, perhaps we could start from the observation that artificial intelligence is above all else a tool. And it goes without saying that the benefits or harm it will bring will depend on its use. This is surely the case, for it has been this way with every tool fashioned by human beings since the dawn of time.
It should also be noted that applications similar to the one I have just mentioned will be used ever more frequently, due to the fact that artificial intelligence programs will be increasingly equipped with the capacity to interact directly (chatbots) with human beings, holding conversations and establishing close relationships with them. These interactions may end up being, more often than not, pleasant and reassuring, since these artificial intelligence programs will be designed to learn to respond, in a personalised way, to the physical and psychological needs of human beings. It is a frequent and serious mistake to forget that artificial intelligence is not another human being, and that it cannot propose general principles.
Rather than being “generative”, then, it is instead “reinforcing” in the sense that it rearranges existing content, helping to consolidate it, often without checking whether it contains errors or preconceptions. In this way, it not only runs the risk of legitimising fake news and strengthening a dominant culture’s advantage, but, in short, it also undermines the educational process itself. Education should provide students with the possibility of authentic reflection, yet it runs the risk of being reduced to a repetition of notions, which will increasingly be evaluated as unobjectionable, simply because of their constant repetition.
If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble.
We do not believe the current ‘scraping’ of BBC data without our permission in order to train Gen AI models is in the public interest.
I think it's because he wants the most powerful AI in the world to be controlled by him. And, again, I've seen Elon's attacks on many other people, many friends of mine — everyone gets their period of time in his spotlight. But this all seems like standard behavior from him. ... I'm upset by it, for sure. I was talking to someone recently who I did think of as close and they said, like, 'Elon doesn't have any friends. Elon doesn't do peers, Elon doesn't do friends.' And that was sort of a sad moment for me, because I do think of him as a friend.
We've seen examples before of how AI is only as good as the data that it learns from.
Named after the main character in Alfred Hitchcock's "Psycho," Norman "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," according to MIT.
In the future, A.I. systems might interpret whether a query requires a rigorous factual answer or something more creative. In other words, if you wanted an analytical report with citations and detailed attributions, the A.I. would know to deliver that.
Elon Musk has formed a firm called Neuralink; he thinks that, if humanity is to survive the advent of artificial intelligence, it needs an upgrade.
What we found pretty quickly with the AI companies is not only was it not an exchange of value, we’re getting nothing in return. Literally zero. AI companies have leached value from writers in order to spam Internet readers.