Teaching for Tomorrow: Unpacking AI for Teaching and Learning
Develop a working understanding of AI
Learn how/where to access AI
Generate ideas to use in your classroom tomorrow
▶️ Your Undivided Attention: The AI Dilemma
Major Takeaways:
Half of AI researchers believe there's a 10% or greater chance that humans will go extinct from their inability to control AI. When we invent a new technology, we uncover a new class of responsibility. If that technology confers power, it will start a race - and if we don’t coordinate, the race will end in tragedy.
Humanity’s ‘First Contact’ moment with AI was social media - and humanity lost. We still haven’t fixed the misalignment caused by broken business models that encourage maximum engagement. Large language models (LLMs) are humanity’s ‘Second Contact’ moment, and we’re poised to make the same mistakes.
Guardrails you may assume exist actually don’t. AI companies are quickly deploying their work to the public instead of testing it safely over time. AI chatbots have been added to platforms children use, like Snapchat. Safety researchers are in short supply, and most of the research that’s happening is driven by for-profit interests instead of academia.
The media hasn’t been covering AI advances in a way that allows you to truly see what’s at stake. We want to help the media better understand these issues. Cheating on homework with AI or stealing copyrighted art for AI-generated images are just small examples of the systemic challenges ahead. Corporations are caught in an arms race to deploy their new technologies and gain market dominance as fast as possible. In turn, the narratives they present are shaped to emphasize innovation and downplay potential threats. We should put the onus on the makers of AI to prove its safety - rather than on citizens to prove its danger.
▶️ Your Undivided Attention: AI Myths and Misconceptions
Major Takeaways:
AI Myth 1: The net good outweighs the net bad. AI can potentially help solve some complex societal problems. However, if those solutions land in a broken, dysfunctional society, how many of them can be realized?
AI Myth 2: The only way to get safe AI products is by testing and deploying AI as quickly as possible into society. It's one thing to test these AI systems with real people; it's another to bake immature technologies into fundamental social infrastructure right away, which quickly creates economic dependencies.
AI Myth 3: We can’t afford to pause or slow down. This is a race, and we need to stay ahead of China. This shouldn’t be a race to recklessly deploy AI as fast as possible. It should be a race to determine who can safely harness AI within their society. The overzealous AI race happening in the West is actually helping China move faster and catch up to the United States.
AI Myth 4: We shouldn’t worry about AI because it’s ‘just a tool.’ Conventional tools don’t have the ability to run in an autonomous loop. Today, anyone can give GPT-4 a goal, and it can make and execute a plan on its own, creating opportunities for potential societal chaos. And because autonomous AI agents act on their own, it will be difficult to hold anyone accountable for their downstream effects.
AI Myth 5: The biggest threats from AI stem from bad actors abusing AI, not the AI itself. No doubt there will be some bad actors, but the biggest risk comes from normal, average uses of AI to speed up everyday processes that are already creating harms within our societies and for our planet. This raises the question of whether we can align AI with good outcomes, since it’s landing in the misaligned system of late-stage capitalism. AI will supercharge capitalism, and in doing so supercharge that system’s existing misalignment.
▶️ Your Undivided Attention: Inside the First AI Insight Forum in Washington