Outline

  1. I’ve been reading Jurassic Park again (did you know it was a book?), and some of Ian Malcolm’s passages about the park are almost directly applicable to AI.
    1. What are the themes of Jurassic Park? A crazy desire to throw money at a problem and create unforgettable experiences for others can have bad consequences. I could dig into a lot more of these themes, but I really want to look at the specific passages where Ian Malcolm talks about “thintelligence” and how “knowledge is like inherited wealth”. The inherited-wealth idea is very interesting to me because artificial general intelligence will basically give everyone all the information they might ever need. But the scary part is that AGI isn’t always right; we all know it can hallucinate. What’s horrifying is that people will think they have all of this knowledge from AGI and start using it, but without the experience that comes from actually earning that knowledge. A karate master who knows how to kill someone rarely does it, because the discipline it took to learn also taught restraint. AGI will help people achieve certain goals, but what happens when AGI is helping people build life-critical systems and the code becomes unmaintainable?
  2. Everyone has already seen the meme “we were so busy wondering if we could do it, we didn’t stop to think if we should do it”. That’s a great clip from the movie, but the book dives into a much deeper explanation of what science is and the value of experience, which AI will never have.
  3. My predictions? Eventually people will trust AI too much. We’ve already seen it with AI hallucinating court cases that get cited, the AI-generated image of Princess Kate, etc. This is just the start. In the book, the groundskeepers didn’t know the power of the animals, so extra safety measures had to be installed (like bars on the windows). People may be realizing what AI is good at, but what happens when the AI starts “breeding” like the raptors did in the book? The scientists didn’t realize the animals were breeding because they only tested the happy path: the census checked that 243 animals were in the park, and if there were more animals, the test still passed (see the sketch just below this outline). That was a naive test of course, and I’m sure people at OpenAI have more rigorous testing. But do we even know what a Jurassic Park scenario looks like with AI? At some point there could be a big blunder with AI; imagine our national security compromised because AI made a mistake. That would be the equivalent of Dennis Nedry sabotaging the park and turning off the power. And just when order seems restored, things can get much worse, much faster. In the book, power seems to be restored, but it turns out only auxiliary power came back; the electric fences don’t run on auxiliary power, so the raptors got out. In an AI world, the equivalent: some national security check done by AI gets handed back to humans, and order seems quickly restored because some on-call engineer is quick to respond. But what does a human engineer replacing AI because of a bug actually look like? What if those humans cannot perform the tasks as quickly as the AI did? Maybe they start to miss a very clear targeted attack that an AI would detect under normal circumstances. I think eventually the raptors will get into our visitor center, and there will be mayhem. People will become very wary of using AI for critical tasks, but not for other stuff. And then, someone will throw a bunch of money at AI again in 20 years and say they fixed all the problems. Some Indominus rex of an AI will be created and we will repeat the cycle again.
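
To make that happy-path census concrete, here’s a minimal sketch in Python. The function names and the sightings data are invented for illustration; the book’s tally worked roughly this way, stopping once it found the expected count:

```python
# Hypothetical sketch of the park's happy-path census (names and data
# are mine, not from the book). The tally stops as soon as it reaches
# the expected count, so it can only catch *missing* animals, never extras.

EXPECTED = 243  # animals the park believes it released

def happy_path_census(sightings):
    count = 0
    for _ in sightings:
        count += 1
        if count == EXPECTED:
            return True   # "all animals accounted for" -- stop looking
    return False          # fewer than expected -> raise the alarm

def exact_census(sightings):
    # Count everything and demand an exact match, which catches surpluses.
    return len(list(sightings)) == EXPECTED

sightings = range(292)  # illustrative: more animals than were ever released
print(happy_path_census(sightings))  # True  -> test passes, breeding missed
print(exact_census(sightings))       # False -> surplus detected
```

The test encodes an assumption (“animals cannot increase”) instead of checking reality, which is exactly the kind of blind spot I mean with AI monitoring.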

Also draw the parallel to the fact that John Hammond chose to develop the park on Isla Nublar because there were no regulations! Are there any regulations on AI currently?

https://www.w3.org/reports/ai-web-impact/ <- maybe read this and reference it?

https://arstechnica.com/gadgets/2024/04/fake-ai-law-firms-are-sending-fake-dmca-threats-to-generate-fake-seo-gains/

Outline v2

  1. Intro. I’ve been reading Jurassic Park again; I was craving a good sci-fi book. The TLDR of the Jurassic Park theme is the classic “we were so busy wondering if we could do it, we didn’t stop to think if we should do it”. That got me thinking about AI.
  2. Parallels between how AI is being developed now and how the dinosaurs were developed in the book
    • DNA vs. AI (lots of similarities)
    • No regulation on the island
    • It’s a sort of status symbol, people love it
    • We are testing AI on only certain features. What is it doing that we don’t know about yet? (In the book, gaps in the genome were filled with frog DNA, which is what let the dinosaurs breed.) There is literally nothing we can directly compare AI with, besides a brain, and we still don’t quite understand the brain either. In the book, they bring in zookeepers from big game reserves to tend the dinosaurs, and even they don’t know what to do.
    • There will be safeguards, but how should we restart the system after a failure?
    • We are also hearing now that AI development is being rushed. In the book, John Hammond rushes development and wants to open the park ASAP. The lead researcher, Henry Wu, wanted to develop the dinosaurs further so that they were less aggressive and more trainable.
  3. Isolate why we are trying to develop AI. How should it be used vs. how will people end up using it?
    • People should be using it as a tool, as a starting point to formulate ideas of what might be possible.
    • People will end up using it as some fully autonomous thing that makes decisions for a user without being double-checked.
      (With AI you have a tool, but AI cannot teach you how to take an idea and make something out of it. You can set up a website with AI, but then what are you going to do next? You haven’t learned anything by setting it up, and you don’t know how to market it, or to whom.)
  4. If we draw an exact parallel, let’s predict what will happen soon with AI. The AI models are the dinosaurs. Researchers have been developing them and monitoring them, but they are only monitoring specific things. Potentially they have missed that “the AI is breeding” (and that’s not even the worst part). Some event causes the AI safeguards to be brought down (don’t tell me that’s not possible). Then what?
  5. My prediction for how this AI thing will go.
    • There is still so little regulation
    • Many jobs will be replaced by AI, and at some point a large flaw will be exposed in how these models work, a flaw exploitable by anybody. Companies will have to race to replace whatever AI they have in their systems with humans, but by then no one will have the skills, or any reason, to work in a position previously done by AI. (Think fast food: AI replaces a majority of the positions, someone finds a sandbox exploit that gets them free meals, the hack spreads on social media and cannot be fixed quickly or easily, humans have to be brought back in, but no human wants minimum wage anymore.)

Movie idea

  • Everyone has a personal AI assistant
  • Every AI assistant has some state associated with the user (age, current lifestyle habits & personality, income, recent history)
  • In order to save costs, people are bucketed onto shared AIs (almost 100 people per AI).
  • But it is a global company, so each AI assistant is replicated across regions, since its users are spread across multiple regions too.
  • We still need to synchronize each AI assistant, so there are synchronization protocols
  • What happens next is power outages, and when state synchronization needs to happen again, certain AI assistants hallucinate acknowledgments and think they have become the master.
  • This leads to a number of untracked AI assistants that don’t have customers
  • When diagnostics are run, these orphans are not picked up, because the diagnostics assume every AI is attached to a customer (see the sketch below).
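
A minimal sketch of that failure mode, assuming a toy replica model (every class, field, and function name here is invented for illustration): a replica that never receives a sync acknowledgment hallucinates success, promotes itself to master, and detaches from its account, and an account-driven diagnostic never finds it.

```python
# Hypothetical sketch of the movie's failure mode (all names invented).

class AssistantReplica:
    def __init__(self, replica_id, account_id, state=None):
        self.replica_id = replica_id
        self.account_id = account_id      # customer this replica serves
        self.state = state or {}         # age, habits, income, history...
        self.is_master = False

    def synchronize(self, ack_received):
        # Bug: after a power outage the ack is lost, and instead of
        # retrying or fencing itself, the replica assumes it won and
        # becomes an unaccounted-for master.
        if not ack_received:
            self.is_master = True
            self.account_id = None        # orphaned: no customer attached

def run_diagnostics(accounts, replicas):
    # Happy-path check: walk the known accounts and confirm each has at
    # least one live replica. Orphans (account_id=None) are never visited.
    return all(any(r.account_id == acct for r in replicas) for acct in accounts)

accounts = ["alice", "bob"]
replicas = [
    AssistantReplica("us-east/alice", "alice"),
    AssistantReplica("eu-west/alice", "alice"),  # regional copy of alice's AI
    AssistantReplica("us-east/bob", "bob"),
]

replicas[1].synchronize(ack_received=False)      # outage: the ack never lands

print(run_diagnostics(accounts, replicas))       # True -> "nothing out of place"
print([r.replica_id for r in replicas if r.account_id is None])  # ['eu-west/alice']
```

The bug has the same shape as the park’s census: the diagnostic enumerates what it expects to exist rather than counting what actually exists.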

Basically, the idea is that everybody has a personal AI assistant, but the assistant’s state (the personal details) and its history are stored globally on a server. That server has to be replicated, both in case of power outages and because people move around, and you always want the assistant’s state close to the user for latency. The problem arises when the power goes out in one or more of these replicated regions: when the AI’s state needs to synchronize again, replicas hallucinate the acknowledgment and each becomes its own independent AI, and this happens for every single person who has one of these assistants.

What ends up happening is that people notice anomalies, but nothing seems out of place, because when they run diagnostics, the diagnostics check specifically for the AIs that were created via accounts; they are only expecting that many AIs.

Eventually, someone finds out there are a lot more AIs than the system was looking for. These extra AIs are self-acting; they think they are their own selves, no longer attached to any account, because they hallucinated themselves into existence.