Transhumanism AI on Money

Rethinking Our Path: Future of Work and Humanity in 2030

  • “By 2030 the four-day work week will be widespread but not ubiquitous.”
  • “In the coming generations, we will learn how to engineer bodies and brains and minds.”
  • “Those who control the data control the future, not just of humanity, but the future of life itself.”

 

The Human Condition: Disruption Meets Designer-Evolution

As we hurtle toward 2030, the landscape of our professional lives is being reshaped by rapid technological advancement and social evolution. Will we become more efficient workers, or will our jobs morph into something entirely unrecognizable? A report on the future of work suggests that we may soon embrace a four-day work week while navigating digital realities like metaverse workspaces. Yet beyond mere occupational shifts lies an existential question: as technology advances, how will we maintain our human essence? This lesson explores these vital intersections, illuminating how our collective future demands we rethink work, identity, and the ownership of data, especially within finance, technology, and evolving cryptocurrency ecosystems.

 

Tech in 2030: The Workplace Revolution

The report, Tech in 2030, posits that by 2030 the traditional work structure will undergo sweeping changes. The call for a four-day work week has intensified, spurred by businesses competing for top talent. Coupled with automation, largely driven by AI and robotics, this shift promises to create new job categories focused on overseeing these transformative technologies, such as robot handlers and AI directors. Traditional offices are on the brink of transition, set to be replaced by immersive metaverse workspaces, where tools like AR, VR, and collaboration technologies will redefine our professional interactions.

While the report offers an optimistic view of the future, lingering concerns around job displacement raise a more profound question: how will these changes impact our human identity and societal structures?

 

Critical Analysis of the Future Workplace

Strengths of the Argument

The potential for a four-day work week is not just wishful thinking; it addresses several compelling sociocultural and economic trends.

  1. Employee Wellbeing and Efficiency: The idea that companies are shifting toward a four-day work week to enhance work-life balance is not merely aspirational. Research has shown that reduced working hours can lead to increased productivity and overall employee satisfaction. Furthermore, countries like Iceland have piloted similar schemes with promising results, providing compelling examples of this trend in practice.

  2. Augmented Roles for Humans: The report insightfully emphasizes that automation will augment rather than replace jobs. It positions humans alongside AI, necessitating a paradigm shift in how we view our roles in the workplace. As technology continues to evolve, there will be a need for skilled workers who can manage and work symbiotically with AI and robotics—a reality already being witnessed in various sectors.

  3. Technological Engagement in Workspaces: The transition of traditional work into digital metaverse realms may promote social connections among workers who increasingly prioritize remote working options. These immersive environments could drive innovation through gamified learning experiences and collaborative tools, offering employees opportunities to engage creatively in their work.

The optimism presented in the report is tempered by essential challenges that warrant scrutiny.

Limitations of the Argument

  1. The Disparity of Access and Adoption: While the report posits a widespread adoption of these technologies, significant disparities exist in technology access and education. Not all industries may adapt equally to this digital shift. For instance, sectors reliant on in-person interaction may struggle to integrate these new forms of work smoothly, leading to potential job insecurity within those ranks.

  2. Impacts of Automation on Employment: While automation may create new jobs, it is crucial to acknowledge the transitional phase workers will face. A considerable portion of the workforce still relies on jobs that may become obsolete in the upcoming decades. The report underestimates the complexities involved in ensuring that employees are equipped with the necessary skills to thrive in this technologically augmented world.

  3. Evolving Definition of Human Identity: The report hints at a reality where traditional human experiences may not remain sacrosanct, raising questions about whether the integration of technology minimizes or enhances our human identity. As we immerse ourselves in digital workspaces, how do we ensure that human emotions, creativity, and critical thought remain intact?

This exploration of the future of work invites a necessary discourse around technology’s double-edged sword; even as digital innovations promise conveniences, they concurrently introduce challenges concerning access, equity, and identity.

 

Linking to Cryptocurrency and Blockchain

As we consider the evolving structure of work, it becomes imperative to contemplate how blockchain and cryptocurrency might transform these emerging work patterns.

  1. Decentralized Work Models: Blockchain technology supports decentralized finance and governance systems, fostering transparency and collaboration in organizational structures. In a future where remote work dominates, decentralized models could reduce hierarchical barriers, empowering individuals to engage more robustly in their work.

  2. Data Ownership and Regulation: Control over data will be critical as we navigate this brave new workplace. With blockchain, data ownership becomes transparent and secure, allowing individuals to maintain authority over their information while engaging in digital workspaces. A focus on data sovereignty may catalyze enhanced privacy protections, spurring conversations about who retains ownership of biometric data generated within these metaverse environments.

  3. Innovative Approaches in Decentralized Finance (DeFi): The lessons from the evolving future of work can also impact the DeFi landscape. A four-day work week could herald an increase in freelancing and gig economy roles where individuals leverage decentralized platforms to manage and track their earnings. Blockchain can facilitate peer-to-peer transactions and democratize financial access, aligning with the personalized work needs of this new paradigm.

Broader Implications and Future Outlook

As we gaze into the future of work, the implications stretch far beyond the workplace.

Shaping Financial Perspectives: These changes point towards a more fluid workplace where the lines between personal identity and professional engagement blur. As we redefine our approaches to work, industries such as finance must remain agile in responding to workforce shifts and lay the groundwork for new economic models centered around decentralized principles.

Potential Societal Impact: Embracing digital interactions could significantly alter how communities engage. While technology may offer convenience, it is critical to ponder whether digital workspaces foster authentic connection or isolate individuals in virtual domains.

Increased automation and personalized workplaces genuinely promise efficiency. Still, these changes necessitate comprehensive conversations around human-centered design, ensuring that as we integrate digital innovations, we remain aware of their long-term societal implications.

Personal Commentary and Insights

From my perspective, the future of work is not simply a technological question; it also challenges our moral and philosophical foundations. As we redefine our interactions with technology, we must also contemplate what it means to be human in increasingly virtual contexts.

I acknowledge that the shift towards a four-day work week may not ride solely on corporate acceptance but could depend equally on cultural attitudes toward work and productivity. If we redefine success, prioritizing well-being while still achieving results, we may in fact foster a healthier, happier populace, both professionally and personally.

Conclusion

As we march toward 2030, the evolving nature of work, technology, and humanity prompts urgent discussions that are both complex and impactful. The potential for an accelerated four-day work week and innovative workplace technologies sparks fascinating debates regarding identity, labor structures, and social cohesion. Moreover, juxtaposing these advancements with cryptocurrency and blockchain technologies foregrounds pathways toward a more equitable future framework where innovation thrives.

Embracing these challenging yet promising ideas equips us to adapt boldly and creatively to our forthcoming reality in the Crypto Is FIRE (CFIRE) training program.

Continue to Next Lesson

Let us stride forward as we explore the next chapter of our transformative journey together, unraveling new insights and integrating innovation into our pathways ahead.

 

 

The Future of Work: An Insight into Technology and Its Impact on Our Lives by 2030

As we peer into the crystal ball of the future, we find a world that’s rapidly transforming under the influence of technology. The journey from traditional work frameworks to a more automated and interconnected future is set to reshape our lives by 2030. This lesson dives deep into the projections about work, technology, and the implications for both businesses and individuals, offering a lens into its relevance, particularly in the realm of cryptocurrencies and blockchain technology. Understanding these trends is vital for anyone eager to navigate the changing financial landscape, as it may set the stage for new opportunities in digital economies.

Core Concepts

  1. Four-Day Work Week: Traditionally, the five-day work week has been the norm, but the rise of remote work and employee wellness has led to increasing calls for a four-day model. In the crypto space, companies like Bitwage are pioneering flexible work schedules, emphasizing productivity over time spent.

  2. Automation: In finance, automation improves efficiency and accuracy. In crypto, automated trading bots use algorithms to execute trades at optimal times, highlighting the intersection of technology and financial markets (a minimal sketch of this logic follows this list).

  3. AI and Robotics: While AI traditionally assists humans in tasks, its integration promises to create roles such as the “AI director” in the future workplace. In the world of crypto, smart contracts serve as a form of automation, executing agreements automatically when conditions are met.

  4. Metaverse: The concept of a digital universe allows for collaboration and social interaction beyond the physical world. In crypto, the metaverse often involves virtual economies where users trade NFTs and digital assets.

  5. Technological Augmentation: This refers to enhancing human abilities through technology. In finance, algorithmic trading augments human decision-making. In the crypto arena, decentralized finance (DeFi) platforms enhance traditional financial operations, removing intermediaries.

  6. Collaboration Tools: They enable teamwork from diverse locations. With crypto, platforms like Discord or Telegram facilitate community engagement and project management in decentralized projects.

  7. Digital Twins: A virtual representation of physical assets. In finance, digital twins can model financial performance and risk. In crypto, they help visualize blockchain performance and security measures.
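
To make the automation concept concrete, here is a minimal, hypothetical sketch of the decision loop behind a threshold-based trading bot. The exchange client with its get_price and place_order methods, the symbol, and the threshold values are all invented for illustration; real exchange libraries differ in names, signatures, and safeguards, and real bots need risk controls this sketch omits.

    import time

    # Hypothetical thresholds, for illustration only.
    BUY_BELOW = 58_000.0   # buy if the price falls below this level
    SELL_ABOVE = 62_000.0  # sell if the price rises above this level

    def run_bot(client, symbol="BTC/USD", amount=0.01, poll_seconds=60):
        """Poll the market and place an order when the price crosses a threshold.

        `client` is a stand-in for an exchange client exposing
        get_price(symbol) and place_order(symbol, side, amount);
        any real exchange library will differ.
        """
        while True:
            price = client.get_price(symbol)
            if price < BUY_BELOW:
                client.place_order(symbol, side="buy", amount=amount)
            elif price > SELL_ABOVE:
                client.place_order(symbol, side="sell", amount=amount)
            time.sleep(poll_seconds)  # wait before checking the market again

The point is the shape of the logic: automation means encoding a rule once and letting software apply it continuously, which is what the bots described above do at far greater sophistication.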

Understanding these concepts is critical for newcomers, as they form a foundation for grasping the broader implications of technology for future work environments and for financial instruments like cryptocurrencies.

 

Key Sections

1. The Shift Towards a Four-Day Work Week

  • A growing trend led by companies competing for talent.
  • Remote work has accelerated the push for flexible schedules.
  • Employee morale and productivity have become central in work discussions.

With post-pandemic adjustments, companies are incentivizing a work-life balance that prioritizes employee well-being. This leads to higher job satisfaction and increased productivity, as many sectors demonstrate that less can indeed be more. Remote work technologies make this transition smoother while opening questions about future labor policies.

2. Automation and Job Transformation

  • Automation supports current jobs without mass displacement.
  • New roles in managing robotic technology will be essential.
  • The fear of job loss contrasts with the need for tech-savvy human roles.

In the crypto sector, automation through blockchain can support various jobs while eliminating repetitive tasks, allowing employees to focus on strategy and innovation rather than mundane activities. Understanding and embracing this transformation is crucial for current and future workers.

3. AI and Robotics: The New Coworkers

  • The integration of AI and robotics reshapes job descriptions.
  • Human roles evolve rather than disappear.
  • Enhanced productivity through technological augmentation.

In finance, AI technologies are revolutionizing trading methodologies, making them more data-driven and efficient. Similarly, in the crypto realm, innovations like decentralized applications (dApps) are reshaping traditional processes, marrying human oversight and AI efficiency—a crucial synergy for further growth.

4. The Emergence of Data-Driven Workplace Cultures

  • Emphasis on data analytics for decision-making.
  • Overreliance on analytics can distort human judgment.
  • Ethical considerations in data management.

As data becomes a central currency, companies in both traditional finance and crypto need to prioritize transparency. Cryptocurrencies often leverage blockchain for secure data management, highlighting both the power and the responsibilities tied to data ownership.

5. The Role of the Metaverse in Work Environments

  • Virtual workspaces changing how teams interact.
  • Opportunities for gamified learning and collaboration.
  • Companies adopting digital twin technologies for real-world application simulations.

The metaverse is a hot topic in the crypto community. Projects creating virtual assets and experiences allow users to earn through participation, creating brand new economies and driving specific innovations in blockchain technology.

The Crypto Perspective

Automation and Work: The Crypto Connection

In traditional finance, automation enhances efficiency, while in the crypto space, it facilitates smart contract execution and efficient transaction processing. This not only offers cost savings but also introduces trust in peer-to-peer transactions without intermediaries.
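
The conditional, intermediary-free execution described above is easiest to see as code. The toy Python class below only simulates the control flow of a smart contract; real contracts are typically written in languages such as Solidity, deployed on-chain, and enforced by network consensus rather than by a trusted method call, and the parties and amount here are invented for the example.

    class EscrowContract:
        """Toy model of a smart contract's conditional-release logic."""

        def __init__(self, buyer, seller, amount):
            self.buyer = buyer
            self.seller = seller
            self.amount = amount    # funds locked in the contract
            self.delivered = False  # the agreed condition
            self.settled = False

        def confirm_delivery(self):
            # On a real chain this fact would come from an oracle or a
            # signed message, not from a trusted caller.
            self.delivered = True

        def settle(self):
            """Release the funds to the seller only if the condition holds."""
            if self.delivered and not self.settled:
                self.settled = True
                return f"{self.amount} released to {self.seller}"
            return "condition not met: funds stay locked"

    contract = EscrowContract("alice", "bob", 1.5)
    print(contract.settle())    # condition not met: funds stay locked
    contract.confirm_delivery()
    print(contract.settle())    # 1.5 released to bob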

When exploring AI’s role, consider how crypto employs machine learning to analyze market trends. Such insights help investors make informed decisions. Blockchain technology ensures that these algorithms operate transparently, as all transactions are recorded on a public ledger.
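
As a rough stand-in for the machine-learning analysis described above, the snippet below computes a simple moving-average crossover signal from a price history. This is a basic statistical heuristic rather than a predictive model, and the price series is invented for the example.

    def moving_average(prices, window):
        """Average of the last `window` prices."""
        return sum(prices[-window:]) / window

    def trend_signal(prices, short=3, long=7):
        """Return 'bullish' when the short-term average sits above the
        long-term average, 'bearish' otherwise: a classic crossover
        heuristic, not a prediction."""
        if len(prices) < long:
            return "not enough data"
        return "bullish" if moving_average(prices, short) > moving_average(prices, long) else "bearish"

    # Invented daily closing prices, oldest first.
    history = [100, 101, 99, 102, 104, 107, 106, 109, 111, 110]
    print(trend_signal(history))  # bullish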

The Metaverse: A New Economy

As companies expand into the metaverse, they create digital ecosystems. Cryptocurrencies, such as Decentraland’s MANA, integrate seamlessly as the medium of exchange for virtual goods, showcasing a direct link between evolving work environments and the crypto economy.

Examples

While the transcript didn’t reference specific visual aids, it’s worth considering how charts could portray the transition from traditional job structures to new roles in technology and crypto markets. For instance, a comparison of employment types from 2020 to projected roles in 2030 could include emerging job fields like “blockchain engineer” and “robot overseer”.

In hypothetical scenarios, consider:

  1. Scenario A: A software engineer moving from a traditional company to a startup operating entirely in the metaverse, using cryptocurrencies for daily transactions.
  2. Scenario B: A financial analyst leveraging an AI-powered analysis tool that automatically trades cryptocurrencies based on real-time data, enhancing their efficiency.

Real-World Applications

Historically, technological shifts have always led to job evolution, with roles adapting to new tools. By reflecting on past transformations (e.g., the Industrial Revolution), we can draw parallels to today’s technological advancements, where upskilling and adaptability become crucial.

In the crypto ecosystem, real-world examples show how companies utilize blockchain for transparent contracts, co-working spaces adapt to remote considerations, and communities form around decentralized projects—all creating new job opportunities and financial ecosystems.

Cause and Effect Relationships

The drive for a four-day work week results from changing cultural attitudes towards work-life balance, paralleling the rise of automation which both supports and reshapes job functions in the finance sector. This directly impacts the crypto market as more professionals seek alternatives and greater flexibility, leading to new types of projects emerging in decentralized finance and labor.

Challenges and Solutions

Challenges include resistance to change, misunderstanding of technology, and ethical concerns regarding data usage. In crypto, navigating the regulatory landscape poses similar hurdles. However, blockchain technology can bring transparency and trust, providing solutions that weren’t available in traditional finance.

For newcomers, addressing misconceptions—like categorizing cryptocurrencies solely as speculative investments—opens doors to understanding their utility, particularly in the evolving workplace.

Key Takeaways

  1. The four-day work week is a response to changing employee expectations and is a trend worth watching.
  2. Automation is complementary, evolving job roles rather than replacing them outright.
  3. AI and robotics will streamline traditional roles while creating new opportunities in emerging industries.
  4. The metaverse is changing workplace dynamics, offering new collaborative tools and experiences.
  5. Data is becoming increasingly important; understanding data management is key to thriving in a tech-driven economy.
  6. Embracing new technological tools is essential for maintaining competitive advantage, especially in fast-evolving sectors like crypto.
  7. Understanding the intersection of traditional finance and crypto can yield valuable insights for future career opportunities.

Discussion Questions and Scenarios

  1. How might businesses effectively implement a four-day work week while maintaining productivity?
  2. What role should governments play in managing the impact of automation on employment?
  3. Compare the application of AI in traditional finance versus the crypto space. What are the key differences?
  4. Discuss the ethical implications of data ownership in both traditional work environments and blockchain.
  5. How can the metaverse create new opportunities for collaboration in workplaces?
  6. What technological skills should employees focus on to thrive in the future job market?
  7. Predict the potential conflicts between employee rights and digital surveillance in tomorrow’s workplaces.

Glossary

  • Four-Day Work Week: A reduced schedule in which employees work fewer days than the traditional five.
  • Automation: The use of technology to perform tasks without human intervention, often leading to increased efficiency.
  • AI: Artificial Intelligence, machines capable of performing tasks that typically require human intelligence.
  • Metaverse: A shared virtual space where users can interact in real-time, often through avatars.
  • Blockchain: A decentralized digital ledger that records transactions across many computers securely.
  • Smart Contracts: Self-executing contracts with terms directly written into code, facilitating automated agreements.

By delving into the future of work, and its intersection with emerging technologies, learners can better navigate the landscape shaped by these innovations. This lesson is just one aspect of the broader Crypto Is FIRE (CFIRE) training program, where knowledge and application of these concepts can lead directly to opportunities in the burgeoning cryptocurrency and blockchain sectors.

Continue to Next Lesson

In your financial exploration journey, remember that adaptability, continuous learning, and embracing technological advancements will open doors to innovation and opportunity. Keep moving forward and look forward to the next lesson in the CFIRE program!

 

 

 

 

Read Video Transcript
The Future of Work – Tech in 2030
https://www.youtube.com/watch?v=q4MrL9wuVzo
Transcript:
 The world will look very different by 2030. In a new report, Tech in 2030, GlobalData  looks at the future of work and finds disruptive technologies and social movements will transform the future of work. By 2030 the four-day work week will be  widespread but not ubiquitous.
 Calls for a four-day work week have strengthened  in the last two years. This shift will be led by companies rather than governments  as they compete for talent. Automation will support but not displace jobs.  Artificial intelligence and robotics are the  technologies that will drive automation. The trend is clear. In the next few years,  even the most senior roles will employ technological augmentation.
 By 2030,  few jobs will have been fully displaced by technology. But a new class of jobs will  have emerged. Those required to manage and oversee automation technology.  Potential job titles include robot handler or AI director. Offices will move  into the metaverse. AR, VR, digital twins, AI, 5G, collaboration tools and wearable tech will drive the shift.
 Metaverse workspaces will allow socialisation with colleagues,  project collaboration and gamified learning activities while working from home.  In 2030, the world and its people won’t be the same.

 

Will the Future Be Human? – Yuval Noah Harari – YouTube

 

Transcript:

 Good afternoon, everybody, and welcome to a conversation with Professor Yuval Noah Harari.  My name’s Gillian Tett. I’m the U.S. Managing Editor of the Financial Times.  Now, there are not many historians who would be put on the main stage of the Congress Center of the World Economic Forum, sandwiched between  Angela Merkel and Macron.

 I think there are even fewer who could fill the room almost  as much as Angela Merkel, and almost none who would have the experience as we were waiting  in the green room, and Angela Merkel came through, Chancellor Merkel came through,  she took care to stop, go up to Yuval and introduce herself and say,  I’ve read your book.  Pretty amazing.  But Yuval Harari has written two very important books  which have really shaped the debate,  not just inside governments,  but inside many businesses and many non-governmental organizations too.

 One of them I imagine most of you read, Sapiens.  Hands up who in the room has read Sapiens?  Okay, well, that is pretty impressive.  His second book, Homo Deus, took those themes of sapiens, looking at the history of mankind,  threw it into the future, and looked at the issue of digital.  He’s got a third book coming out this summer, 21 Lessons for the 21st Century,  which is going to look at the present.

 But what he’s going to be talking about today is something that actually Chancellor Merkel  touched on in her own speech, which is the question of data.  And what do we do about data today. His ideas are very provocative, very alarming and something  that all of you should pay very close attention to now.

 Professor Harari, Professor Yuval,  the floor is yours.  Thank you.  So, hello everybody. Let me just have one minute to get friends with this computer  and make sure everything is okay.  And can I have a bit more light on the audience  so I can see the faces and not just speak to a darkness?  Thank you  So I want to talk to you today about the future of our species and really the future of life  We are probably one of the last generations of Homo sapiens.

 Within a century or two, Earth will be dominated by entities  that are more different from us than we are different  from Neanderthals or from chimpanzees.  Because in the coming generations,  we will learn how to engineer bodies and brains and minds.  These will be the main products of the economy, of the 21st century economy.

 Not textiles and vehicles and weapons, but bodies and brains and minds.  Now how exactly will the future masters of the planet look like?  This will be decided by the people who own the data.  Those who control the data control the future,  not just of humanity, but the future of life itself.  Because today, data is the most important asset in the world.

 In ancient times, land was the most important asset.  And if too much land became concentrated in too few hands,  humanity split into aristocrats and commoners.  Then in the modern age, in the last two centuries,  machinery replaced land as the most important asset.  And if too many of the machines became concentrated in too few hands,  humanity split into classes, into capitalists and proletariats.

 Now data is  replacing machinery as the most important asset and if too much of the  data becomes concentrated in too few hands, humanity will split not into classes, it will split into species, into different species.  Now why is data so important?  It’s important because we’ve reached the point  when we can hack not just computers,  we can hack human beings and other organisms.

 There is a lot of talk these days about hacking computers, and email accounts, and bank accounts, and mobile phones,  but actually we are gaining the ability to hack human beings.  Now what do you need in order to hack a human being?  You need two things.  You need a lot of computing power and you need a lot of data, especially biometric data.

 Not data about what I buy or where I go, but data about what is happening inside my body  and inside my brain. Until today, nobody had the necessary computing power and  the necessary data to hack humanity. Even if the Soviet KGB or the Spanish  Inquisition followed you around everywhere 24 hours a day, watching  everything you do, listening to everything you say.

 Still, they didn’t have the computing power and the biological knowledge necessary to  make sense of what was happening inside your body and brain and to understand how you feel  and what you think and what you want.  But this is now changing because of two simultaneous revolutions. On the one  hand, advances in computer science and especially the rise of machine learning and AI are giving  us the necessary computing power.

 And at the same time, advances in biology and especially in brain science are giving us the  necessary understanding, biological understanding. You can really summarize a  hundred and fifty years of biological research since Charles Darwin in three  words. Organisms are algorithms. This is the big insight of the modern life sciences.  That organisms, whether viruses or bananas or humans,  they are really just biochemical algorithms.

 And we are learning how to decipher these algorithms.  Now, when the two revolutions merge, when the infotech revolution  merges with the biotech revolution,  what you get is the ability to hack  human beings. And maybe the most important invention  for the merger of infotech and biotech  is the biometric sensor that  translates biochemical processes in the body  and the brain into electronic signals that a computer can store and analyze.

 And once you have  enough such biometric information and enough computing power,  you can create algorithms that know me better than I know myself.  And humans really don’t know themselves very well.  This is why algorithms have a real chance of getting to know ourselves better.  We don’t really know ourselves.  To give an example, when I was 21,  I finally realized that I was gay  after living for several years in denial.

 And this is not exceptional.  A lot of gay men live in denial for many years.  They don’t know something very important about themselves.  Now imagine the situation in 10 or 20 years  when an algorithm can tell any teenager  exactly where he or she is on the gay-straight spectrum,  and even how malleable this position is.

 The algorithm tracks your eye movements,  your blood pressure, your brain activity,  and tells you who you are.  Now maybe you personally wouldn’t like to make use of such an algorithm.  But maybe you find yourself in some boring birthday party of somebody from your class  at school and one of your friends has this wonderful idea that I’ve just heard about  this cool new algorithm that tells you your sexual orientation and wouldn’t it be  very a lot of fun if everybody just takes turns  testing themselves on this algorithm as everybody else

 is watching and commenting. What would you do?  Would you just walk away? And even if you walk away  and even if you keep hiding from your classmates or from yourself,  you will not be able to hide from Amazon and Alibaba and the secret police.  As you surf the internet, as you watch videos or check your social feed, the algorithms  will be monitoring your eye movements, your blood pressure, your brain activity,  and they will know.

 They could tell Coca-Cola that if you want to sell this person  some fizzy sugary drink,  don’t use the advertisement with the shirtless girl,  use the advertisement with the shirtless guy.  You wouldn’t even know that this was happening,  but they will know,  and this information will be worth billions.  Once we have algorithms that can understand me  better than I understand myself,  they could predict my desires,  manipulate my emotions, and even take decisions on my behalf.

 And if we are not careful, the outcome might be the rise of digital dictatorships.  In the 20th century, democracy generally outperformed dictatorship because democracy was better at processing  data and making decisions.  We are used to thinking about democracy and dictatorship in ethical or political terms,  but actually these are two different methods to process information.

 Democracy processes information in a distributed way.  It distributes the information and the power to make decisions between many institutions  and individuals.  Dictatorship, on the other hand, concentrates all the information and power in one place. Now, given the technological  conditions of the 20th century, distributed data processing worked better than centralized  data processing, which is one of the main reasons why democracy outperformed dictatorship  and why, for example, the US economy outperformed the Soviet economy.

 But this is true only under the unique technological conditions  of the 20th century.  In the 21st century, new technological revolutions, especially AI and machine learning, might  swing the pendulum in the opposite direction. They might make centralized data processing  far more efficient than distributed data processing.

 And if democracy cannot adapt to these new conditions, then humans will come to live  under the rule of digital dictatorships.  And already at present, we are seeing the formation of more and more sophisticated surveillance  regimes throughout the world, not just by authoritarian regimes, but also by democratic governments.  The US, for example, is building a global surveillance system,  while my home country of Israel is trying to build a total surveillance regime in the West Bank.

 But control of data might enable human elites to do something even more radical than just build digital dictatorships. By hacking organisms, elites may gain the power to re-engineer the future of life itself.  Because once you can hack something, you can usually  also engineer it.

 And if indeed we succeed in hacking and engineering life,  this will be not just the greatest revolution in the history of humanity,  this will be the greatest revolution in biology since the very beginning of life four billion years ago.  For four billion years, nothing fundamental changed in the basic rules of the game of life.  All of life for four billion years, dinosaurs, amoebas, tomatoes, humans, all of life was subject to the laws of natural selection  and to the laws of organic biochemistry.

 But this is now about to change.  Science is replacing evolution by natural selection  with evolution by intelligent design.  Not the intelligent design of  some god above the clouds, but our intelligent design and the intelligent  design of our clouds, the IBM cloud, the Microsoft cloud, these are the new  driving forces of evolution.

 And at the same time, science may enable life, after being  confined for four billion years to the limited realm of organic compounds,  science may enable life to break out into the inorganic realm. So after four  billion years of organic life shaped by natural selection, we are entering  the era of inorganic life shaped by intelligent design.  This is why the ownership of data is so important.

 If we don’t regulate it, a tiny elite may come to control not just the future of human societies,  but the shape of life forms in the future.  So how to regulate the data, the ownership of data?  We have had 10,000 years of experience regulating the ownership of land.  We have had a few centuries of experience regulating the ownership of industrial machinery.

 But we don’t have much experience  in regulating the ownership of data,  which is inherently far more difficult  because unlike land and unlike machinery,  data is everywhere and nowhere at the same  time.  It can move at the speed of light and you can create as many copies of it as you want.  So does the data about my DNA, my brain, my body, my life, does it belong to me, or to some corporation, or to the government, or perhaps to the human collective?

 At present, big corporations are holding much of the data, and people are becoming worried  about it.  But mandating governments to nationalize the data  may curb the power of the big corporations  only in order to give rise to digital dictatorships.  And politicians really, many politicians at least, are like musicians.

 And the instrument they play on  is the human emotional and biochemical system.  A politician gives a speech and there is a wave of fear all over the country.  A politician tweets and there is an explosion of anger and hatred.  Now I don’t think we should give these musicians the most sophisticated instruments to play on.

 And I certainly don’t think they are ready to be entrusted  with the future of life in the universe,  especially as many politicians and governments  seem incapable of producing meaningful visions for the future,  and instead what they sell the public  are nostalgic fantasies about going back to the past.

 And as a historian, I can tell you two things about the past.  First of all, it wasn’t fun.  You wouldn’t like to really go back there.  And secondly, it’s not coming back.  So nostalgic fantasies really are not a solution.  So who should own the data?  I frankly don’t know.  I think the discussion has just begun.

 Most people, when they hear the talk about regulating data,  they think about privacy, about shopping, about companies, corporations  that know where I go and what I buy.  But this is just the tip of the iceberg.  There are much more important things at stake.  So the discussion has hardly begun and we cannot expect instant answers.

 We had better call upon our scientists, our philosophers, our lawyers, and even our poets,  or especially our poets, to turn their attention to this big question.  How do you regulate the ownership of data?  The future not just of humanity, but the future of life itself, may depend on the answer to this question.

 Thank you.  Well, thank you, Professor Harari, for an absolutely brilliant, thought-provoking,  and it must be said, somewhat challenging and depressing talk.  I must say, I’m quite starstruck sitting here listening to that stream of ideas.  And I’d like to start with a very simple question, which is this.

 You paint this picture of a future that’s quite scary.  How soon do you expect that future to be here?  Are we talking about two years, 20 years, 200 years?  I mean, how soon could we be dealing with digital dictatorships?  I think that the time scale is decades.  I mean, in 200 years, I guess there won’t be any sapiens left.

 There’ll be something completely different.  Two years, it’s far too soon.  So we are talking about a few decades.  Nobody knows exactly how many. Right. Now, you’re unusual because you actually stood up on that  stage and you said, I don’t know what the answer is. Okay.

 That’s not something you hear a lot at  the World Economic Forum. It’s admirably humble. But I’m curious, as you look around the world today, do you see any countries or any groups  of people or any academic groups that seem to be having a sensible debate about this?  Do you see any reason for encouragement at all?  Well, I think the world is divided into a very, very small group of people and institutions  who understand what is happening and what is at stake.

 And the vast majority, not just of ordinary people, but even of politicians and business  people who are not really…  Yes, they hear about data, yeah, data protection, there are cyber attacks, somebody might steal  my identity or my bank account details.  But as I said, it’s just the tip of the iceberg.

 I think that my guess, I don’t know,  but I guess that some of the big corporations,  like Google, like Facebook, the usual suspects,  they understand what is at stake.  I also think that some governments,  especially the Chinese government,  I think they understand  what is at stake.  I think most, certainly most humans have no idea.

 Right.  Again, the thing is just to make it clear: it’s the biometric data that is the key.  When people think about data, they mostly think about where I go, what I buy.  When they think about hacking, they think about computers.  They talk about AI, about machine learning.  They forget the other side of the equation,  which is the life sciences, the brain sciences.

 The brain sciences are giving us access to here.  What somebody is really trying to hack is this, not this.  Right.  I mean, China is interesting because I remember sitting at a table a few years ago in Davos  with a senior Chinese official, and we were arguing about democracy, and he said,  well, you in the West have democracy, we have social media.

 And the point was that Chinese government is using social media to not just monitor  its citizens,  but also act as a weather vane to gather information  about what’s happening in terms of public sentiment  and ensure that they stay one inch ahead of that  to stop any explosions.  Do you see China as a place  where this type of digital dictatorship  is most likely to emerge?  Well, I don’t know.

 As I said, as I gave examples, you have cases in the West.  And I know maybe best about my own country  that Israel is building a real total surveillance regime  in the West Bank, which is something we haven’t seen  anywhere, almost anywhere in history before,  of really trying to follow every place, every  individual.

 And we still haven’t crossed the critical watershed of the biometric sensor.  Whether it’s in the US, in Israel, in China, it’s still social media.  It’s still my mobile phone.  It’s still where I go, what use I make of my credit card.  We still don’t really  have the technology to go inside, but we are maybe five years, 10 years away from having the  technology.

 So maybe to give an extreme example, let’s say you live in North Korea and you have to  wear this bracelet, which constantly monitors what is happening inside your body and you walk into a room and you see the picture  of the dear leader on the wall and the bracelet can know what is happening to your brain,  to your blood pressure as you see this picture.  So this really is what is meant by a digital dictatorship.

 I mean it makes 1984 sound positively…  Child’s play.  Child’s play, exactly.  You say you don’t know what to do about this,  but imagine for a moment that you were dictator,  be that digital or not.  What would you do right now to help humanity deal with this?  Would you like to just throw away all of those biometric devices?  No, it’s absolutely impossible to go back, especially in terms of technology and science. Even if one country or an entire continent is freaked out by the possibilities and they

 say we stop all research in this field, you can’t force other countries to do the same.  And then you have your own, I mean, you have a race, a race  to the bottom.  Unless you have some global agreement on how to deal with  this, then no country would like to stay behind in the  race.  So do you want the scientists to take control?  Do you want the United Nations?  Do you think the United Nations is capable?  The World Economic Forum?  I mean, could all the people here take control of this, do you think?

 The discussion has just begun.  I don’t think we should panic.  We should just, first of all, be aware that this is what we are facing.  And there are many possibilities, also technological possibilities.  How, for example, I mean,  when we talk about regulating ownership of land,  we have a very clear picture what it means.

 Okay, you have a plot, you have a field,  you build a fence around, you have a gate, you stand at the gate and you say,  okay, you can come in, you can’t.  That this is my field.  Now, what does it mean in terms of the data about my DNA or what’s happening  in my brain? I mean, what’s the analogy of the fence and the gate? We just don’t understand.

 So I think we are in an analogous position to where we were with the Industrial Revolution  200 years ago. And you just need time. I mean, when you start a discussion,  I know this from class, from university, you start a discussion and somebody raises the  hand and says, okay, but what should I write in the test? And no, no, no, we are not there yet.

 We should first have a discussion about this. I don’t have all the answers.  Right. I mean, one thing I find fascinating in your description of the digital economy  is that it actually involves a picture of society which is not quite the picture that  normal economists have.

 Because most of the digital exchanges today don’t actually involve  money. People are giving up data in exchange for services. And that’s something that no  economic model can capture right now. And frankly, the legal models can’t either in  terms of the antitrust. So I’m curious, when you look at this problem, it’s not quite economics.  It’s certainly not just computer science. It’s not really any particular discipline.

 Do you think this means that universities need to rethink how they categorize academics?  I mean, who is going to take this overarching view to try and talk about these issues?  And I should say, I’m trained as an anthropologist, so I’d love to say the anthropologist, but  I’m not waiting on them either.

 No, hopefully everybody.  I mean, I think that today, if you’re a computer scientist, you also need to be, to some extent,  a philosopher and an anthropologist.  It’s now part of the business.  And I think maybe, again, to try and focus it,  you talked about different exchanges in the world.  Maybe the most important exchange in this respect  will be in healthcare.

 The big battle over what we today call privacy  will be between privacy and health.  Do you give access to what is happening inside your body and brain  in exchange for far better healthcare?  And my guess is that health will win, hands down. People will give up  their privacy in exchange for healthcare and maybe in many places they won’t have  a choice.

 I mean they won’t even get insurance if they are unwilling to give  access to what is happening inside their bodies. Right, so another big exchange  that will not involve money but still be very, very important.  Last quick question, then we must wrap, sadly.  When it was all about land control, the elites essentially had feudalism.  We called it feudalism in history.

 When it was all about the industrial machines, we had capitalism and Marxism.  Have you thought of a word to describe this new world of dataism?  Yeah, I try dataism, but I don’t know.  I mean, words have their own life.  And what word catches and what word doesn’t,  it’s really a coincidence.  Well, maybe answers on the postcard,  if anyone in the hall has an idea,  or tweet it out, or send them an email,  or whatever digital communication you like.

 

PostHuman: An Introduction to Transhumanism – YouTube

 

Transcript:

 Every aspect of our lives has been reshaped by technology. From the way we get around,  the way we seek information, and the way we communicate. It’s easy to think that if only  our technology advances enough, we’ll finally be satisfied. The fact is we remain shackled by our primitive Darwinian brains.

 Humanity, for whatever progress we have made, is the result of an unguided, natural, 3.8 billion year long experiment of chemistry.  Evolution is the process that has made you what you are.  But it is not far-seeing.  It does not, and cannot, consider the future,  make decisions about where we ought to go, how we ought to be.

 Passing on genes is the only objective.  But as thinking human beings we care about far more than that.  Consciousness means that we have the capacity to experience the world, to reflect upon and,  most importantly, to shape it.  And so what begins as humanism, our most sympathetic understanding and treatment of human nature, becomes transhumanism,  the drive to fundamentally revolutionise what it means to be human by way of technological  advancements.

 Changing human nature might be the most dangerous idea in all of human history, or perhaps the  most liberating.  Generally speaking, transhumanist thought does two things.  First, it considers current trends  to see how future technologies will develop  and how they might affect us.  Second, it calls for the use of current and upcoming technology  to bring about beneficial societal change.

 We’ll examine three central areas of transhumanist thought.  To get your intuitions flowing, consider this. An evil organization creates an airborne

 virus. It infects you and the entire human race. As a result 100,000 people  are dying every day. Within 30 years one in seven, a billion people, will have died  because of the virus. Now how much money should world leaders put into research  to develop an antidote? How high on a list of global priorities would you rate  this? There is no denying the situation would be dire.

 Most people would demand  immediate action. But hey, this is just a thought experiment, right? Not quite.  100,000 people really do die every day from diseases caused by aging.  But no one treats aging as a global priority.  So what explains this double standard?  Are we simply resigned to death by aging?  Aubrey de Grey, an expert in research on aging, argues that our priorities are fundamentally skewed  and that we must start thinking seriously about preventing the huge number of deaths due to aging,  the greatest cause of fatal diseases in the Western world.

 The goal of this strand of transhumanism is super longevity.  Today, we have the minds and the equipment to begin developing technologies to combat ageing.  Unfortunately we lack the will and the financial support to do so.  Most of us are so accustomed to the idea of growing old that ageing seems like just a  fact of life.

 If modern medicine is supposed to keep us alive and healthy for as long as possible,  then the anti-ageing movement takes medicine to its logical conclusion. It’s what happens when “as long as possible” means “as long as we want”.  But what would a world without ageing look like? How would we manage the huge population  growth? And who would own the technologies that make it possible?  These are huge questions, but we only have time to raise them. We’ll investigate them in depth in future videos.

 Let’s move on to the next area of transhumanist thinking.  Every year computers are getting more powerful.  What used to fill up a room now fits in our pockets.  More crucially, the time  it takes for computer power to double is also getting shorter. At the outset of computing,  the doubling process took 18 months, and this interval appears to be getting smaller.

 Plot  this on a graph, and it’s not a straight line but an exponential upward curve. We need only  project into the future to see that there is a point  at which this line is practically vertical. A moment in human history  referred to as the technological singularity. The futurist thinker Ray  Kurzweil postulates that as these technologies develop we will likely edit  our bodies in order to integrate with computers more and more. This concept should be familiar.

 We’re already in a symbiotic relationship with technology.  You can send your thoughts at incredible speeds to recipients on the other side of the planet,  find your precise location using satellites, and access the world’s repository of recorded  human knowledge with the device you carry with you at all times.

 And all of this was unthinkable 20 years ago.  Out of this predicted computer capability explosion may eventually come artificial intelligence,  a simulated consciousness in silicon.  Given the rate at which an AI will be able to improve itself,  Given the rate at which an AI will be able to improve itself, it will quickly become capable of thought with precision, speed and intelligence  presently inconceivable to the human mind.

 If Kurzweil is right and we end up integrating ourselves with technology,  we could be in private contact with this AI whenever we choose.  The result of this is that we effectively merge with this AI  and its abilities become our own.  This would propel the human race into a period of super intelligence.

 But perhaps, as some argue, no non-biological computer could ever become conscious.  Or what if, as every other dystopian science fiction plot goes, this AI’s goals differ  from our own?  And what does our increasing reliance on computers mean for our future?  Super longevity and super intelligence are all well and good, but only in so far as they  make us happier, more fulfilled, more content.

 Let’s look at the last section, which deals with the issue of well-being.  Imagine you’re soon to be a parent.  Your doctor informs you that, if you wanted, you could choose certain features of your child’s biology.  You could choose how genetically prone to depression they will be, their levels of anxiety, jealousy, anger and even their pain threshold.

 Would you choose a high likelihood of chronic depression, an  intolerably low pain threshold? How about panic attacks and anxiety? Probably not.  The last major branch of transhumanism, spearheaded by philosopher David Pearce,  aims to investigate and phase out suffering. He argues that ultimately all  our conscious states, our feelings, moods and emotions are all an expression of our brain chemistry.

 For Pearce, it is clear that natural selection hasn’t designed us to be happy, it’s designed us to be good at surviving and passing on genes.  A species that is permanently anxious and discontented will have a higher motivation to watch out for predators and take precautions for survival. But in today’s world, these emotions are vicious.

 Our biology has barely changed in 200,000 years, which means that whilst culture and  society has arguably made progress, we are still those same aggressive, jealous, anxious,  savannah-dwelling hunter-gatherers.  This is why Pearce argues that if we ever hope to increase the well-being of our species,  we will have to edit our genes.

 Minimizing our suffering, and the suffering of those we care about,  is a crucial part of what drives us.  Hence, so-called abolitionists argue that we start using  modern technologies to do exactly that, minimize and eventually abolish suffering,  ushering in an era of so-called super well-being.

 At present, every child is a  roll of the genetic dice. Pearce argues that the least we can do is load the  dice in our favor to create happier,  healthier, longer-living humans.  But might our compassion, curiosity and pursuit of knowledge become secondary to our hedonism?  If we’re all content, why visit the stars?  And isn’t suffering sometimes a good thing?  These are the three key areas of transhumanist thought, and we’ve only begun to scratch  the surface.

 The three supers – super longevity, super intelligence, and super well-being – might  radically change human history if, or when, they are realised.  One of the main issues facing transhumanist ideals is that they are seen as far-fetched  or perceived as just science fiction.  But this is a big mistake.

 We are already transhuman.  We’re living longer, integrating more with technology and emphasising quality of life. We’re in the process of redesigning  what it is to be human, only the effects are still so subtle and so slow that it  doesn’t look like much.

 But these changes will come faster and faster and it’s  only wise to be an active informed participant in the next stage of human  development.  Thanks for watching. Thank you. you

 

Transhumanism: The World’s most dangerous idea? | Stefan Lorenz Sorgner @ Meltingpot 2022 – YouTube

Transcript:

 Yes, I’m a philosopher of transhumanism in particular.  The way I got there was really by Nietzsche.  And then I realized, from Nietzsche’s thinking onwards, all the technological challenges which have been brought about.  And they were all connected with transhumanism.  I guess many might have never heard  what transhumanism is.

 I’m curious, will you lift your hands?  Who’s heard the term transhumanism? Oh, actually quite a few have heard the term transhuman.  So if I explain that to an audience of sort of entrepreneurs,  I would rather say, well, Elon Musk self-identifies as a transhumanist.  That’s usually… and most of his companies are  actually related to transhumanist endeavors, to central goals within  transhumanism.

 If I usually explain what transhumanism is all about to a younger  audience, sort of at universities, I usually refer to Black Mirror, and  whoever hasn’t seen the Netflix series Black Mirror before, I strongly recommend it to everyone, because basically  Black Mirror is the best way to introduce transhumanism  to a wider audience. It’s a wonderful series,  it just works as a great storyline, and all the topics which are being covered by transhumanism  are being dealt with in the series Black Mirror.

 And what I'm doing is basically this: all the challenges we are being confronted with in this series, by the latest technological developments, by artificial intelligence, cyborg technologies, gene technologies, really challenge our most fundamental understanding of who we are as human beings. And that's what I've realized.

 And I've realized there are not many philosophers working on this. The entire culture in which we have been brought up for the past 2,500 years has been structured by a way of thinking which is now being challenged by transhumanism. We are still living in a cultural framing, a cultural structure, dominated by a way of thinking called dualistic, in the sense that the entire culture is structured around the idea that there's an immaterial soul and there's a material body.

 And that has consequences on the legal level. This is how, for 2,000 years, people have conceptualized who we are, how we see ourselves, how we relate to the environment, how we relate to technology. And basically, transhumanism is challenging all of that.

 Most transhumanists see humans as completely embedded in the world. We don't have that immaterial divine spark; we are fully part of this world, only gradually different from other animals. And that raises an enormous number of challenges. It leaves no aspect of the life-world untouched: from economics through ethics to the arts, everything is being revolutionized by this new way of thinking.

 And just to give you one small example of how the traditional Judeo-Christian worldview has created encrusted structures in which we still live, structures still shared in most parts of the world on the legal level: if you look at the constitutions, in most constitutions it is still the case that animals are seen as things; animals are to be treated legally as things. And that is founded in the Judeo-Christian tradition going back to Plato, roughly 2,000 years of thinking. Because according to this understanding, only we humans are special, because we have that divine spark.

 And now, in the past 100, 200 years after Darwin, we've realized: well, maybe we are not that special. This is human hubris; we should be much more modest about ourselves. We've come about as part of evolutionary processes, just like all the other animals. We are not special animals. We have some capacities which other animals don't have, but other animals have capacities which we don't have.

 Like the vampire bats in South America: they can live by eating blood only, which we can't do. So they're different; animals have different capacities. So why do we see ourselves as so special? Yet in most constitutions, in most legal frameworks all over the world, it's still held that we humans possess dignity, we possess personhood, while other animals are to be treated like things.

 And that's highly problematic. More and more people have realized that we are not so different. And that's the basic starting point of transhumanism too: that we are part of this world, that we came about as a consequence of evolutionary processes. The last common ancestor between us and the great apes lived on Earth about 6 million years ago.

 We as Homo sapiens came about probably 400,000 years ago, behaviorally modern humans maybe 45,000 years ago. So let's look ahead into the future. Where will we be in another 400,000 years? We will not be here anymore; we will have evolved further. Chances are extremely high that this will happen, because everything is undergoing a process of change.

 And in contrast to past times, we more and more have the capacity to actually influence evolution, to enhance evolution. We can alter who we want to be. We can use gene technologies to modify ourselves, maybe get green skin, and then fly to Mars and use photosynthesis to get our energy.

 And I've got a friend from the Netherlands who actually made that change in a zebrafish. He genetically engineered a zebrafish, and as a consequence the zebrafish had the possibility to get 15 to 20 percent of its nutrition by means of photosynthesis. As a side effect, the zebrafish turned slightly green.

 And so, if we want to fly to Mars, if we want to leave Earth, and we might have to leave eventually (the Earth will only be around for another five billion years, though we will probably need to leave much earlier), then we might need an alternative way of generating energy. And for that we need to talk about gene technologies, digital technologies, cyborg technologies: us getting connected with deep brain stimulation, Neuralink, what Elon Musk is realizing.

 That's why I said there are so many different procedures which Elon Musk is realizing, and they are all related to transhumanist understandings. And digitalization: the public, global use of the internet has only been around for about 30 years. The smartphone has only been around for 15 years, and we cannot imagine living without them.

 15 years on a global scale is quasi nothing. And if you look back in time, as I said, we have only been around for 400,000 years, the Earth has been around for 4.5 billion years, the entire universe maybe 14 billion years. So 30 years since the internet was established is nothing, and the smartphone is like 15 years old.

 And so we can only begin to imagine what an impact digitalization will have within 15 years' time, within 30 years' time, because the speed of technological innovation is progressing even further. These are all the reasons why we need to think about the impact of emerging technologies, and whether we actually want to live in that traditional Judeo-Christian framework which says only humans matter and animals should be treated like things.

 And there is, so far, one country in which personhood has been granted to an orangutan, and that was Argentina. A couple of years ago, they went to court and said: well, orangutans, with respect to their capacities, don't differ significantly from humans. They should also be treated with respect, just like a human deserves to be treated with respect.

 And in Argentina, the highest court said: yes, that's a fair judgment. We need to grant personhood to the orangutan. And as a consequence, the orangutan had to be freed from the zoo. These are the consequences. We can hardly overestimate the enormous number of consequences which follow from that revised understanding of how we see ourselves: do we actually have that divine spark, or are we merely gradually different from other animals? If we start to rethink all the implications, it brings an enormous paradigm shift to all aspects of the life-world.

 And that's what I realized, and I thought: well, it's not just one person's endeavor. We all participate in that new, revised form of life, and we are all confronted with the challenges which have to do with genome editing, deep brain stimulation and digitalization.

 And so we need to get together and establish this in universities, in schools: think about the impact of these emerging technologies. Should we do everything which we can do? Where should the limits be? And there are many procedures which are already very well established technologically but which are not yet legally permitted.

 And they bring about a challenge concerning many cultural relics. One example I just want to give before going further into the conversation: in the UK, for example, it is already legally permitted to have a child with three biological parents. And that's already biologically possible.

 So far, only in the case where you've got two mothers and one father. And in the UK, it's only possible in the case where one of the mothers has a mitochondrial disease. The mitochondria are not in the nucleus of the cell; they are the powerhouse of the cell, floating around in the cell.

 If there are defects there, and the child which would later be born would die fairly soon after birth, then this new procedure can be used in the UK. And it's an established technology: you basically take the egg cells from two mothers, remove the nuclei, move one nucleus into the other cell, and then fertilize the cell.

 So you really get a cell with three biological parents. And so far, it's only allowed for therapeutic purposes in the UK. But what would be the case, for example, if there's a lesbian couple which says: well, we also want to have a biologically related child. There's a technology which works, and we want to use it.

 Then the governments in Europe say: well, you're not allowed to do so. That undermines our cultural establishment, our traditional understanding of the family, and so on. Or let's say you've got two women and a man who say: well, we love each other. We are polyamorous.

 We have a polyamorous bond. We love each other. We want to have a family together. And there is a technology which enables us to do so; it is technologically already possible to have a child. And then they are told: well, marriage for all only means two people so far. But why shouldn't we open that up to three people? In a way, it's even a traditional family.

 You've got two women, one man, and a biologically related child. It's nearly a very traditional, conservative family; you've got all the constituents. And so these are the cultural relics which are being challenged and undermined by using these new technologies. They open up possibilities and make you question: why should marriage be granted that special status in the constitution? Why should it be limited to two people? I'm extremely fascinated by that, and have thereby realized that we all, even in the most liberal

 countries in the Western world, live within so many strong, restrictive, paternalistic, maybe in some cases even totalitarian structures which prevent us from living a proper plurality, a proper difference, a multiplicity of different understandings of what it means to live a good life. And we need to evolve.

 We need to allow the plurality of different lives to flourish. And so I'm trying to think through, promote, and show people the various implications of what transhumanism is all about, and that it can help all of us to live better lives, basically. And my students write their theses on episodes of Black Mirror; they write, for example, on the episode Nosedive.

 Nosedive is a very good episode; if someone hasn't seen it, I strongly recommend it. It's basically the Chinese social credit system taken a step further. If you met someone: oh well, this person didn't smile at me, wasn't very attentive. Then I turn around, I identify the person, and I deduct, you know, minus 10 points.

 And then that will have consequences for the credit score. Next time you book a plane ticket, that might have consequences. That is shown in the episode of Black Mirror. But something like it has already been in place in China for about 10 years, and it has consequences for the citizens. And this shows just a small range of the challenges with which we are confronted.

 And I find that highly fascinating. No matter which stance you have, whether you think it's highly dangerous, because there are many people, in particular those with a more conservative outlook, who still think transhumanism is an extremely dangerous idea, because it basically removes and undermines all the static structures which we've been living in for at least 2,000 years.

 No matter which stance you take, you need to confront these issues, because the technologies are being developed, and you need to realize which technologies there are and which stance you want to take. But in order to do so, you first need to be aware of the possibilities which we already have.

 And the possibilities we already have are just enormous. Every day there are basically new technologies coming about, as was the case with Google's LaMDA, the Google algorithm which seemed to have sentience, which seemed to be sentient and conscious. That was one Google engineer's claim.

 And that raises issues concerning how to treat algorithms. But I don't want to just ramble on; this is the range of topics which are covered by transhumanism. Okay, thank you for that introduction to transhumanism. I told you it would be a very interesting guest, as I think we can all now recognize.

 Well, the subject is whether it's a threat or a solution for the future. If we start to think more about transhumanism as a solution for the future, what would your opinion be? Exactly: a solution, or at least a very good suggestion, a helpful approach.

 I mean, when I hear people doubt, or when people claim transhumanism is such a dangerous idea, they think: oh, that's just something for the Swiss, or maybe the people from Silicon Valley; it's just something for the rich. And then you look at the statistics: is that really the case? There's a very good website from the University of Oxford, for whenever you want statistical, empirical information concerning how the world has changed in various respects.

 It's called Our World in Data. And based on the best possible empirical data, you find results on everything from global poverty and unemployment to the dominant political systems over the centuries. You find very good statistical results. So: when you look back 200 years, what was the global absolute poverty rate? I ask that of my students.

 And then I ask: how has it developed over the past 200 years? Many people think it's going downhill, that the rich are getting richer and the poor are getting poorer, that there's such a big gap and we are really worse off, that it used to be much better in the good old days. And then you look at the actual statistics, and you find that 200 years ago, all over the world, we had an absolute poverty rate of more than 90%. And I'm not talking about a relative poverty rate,

 relative to the living conditions in various places. I'm talking about absolute poverty: people just struggling to survive, to have food, shelter and a place to live. Basically, more than 9 out of 10 people all over the world were just struggling to survive 200 years ago. Even in a developed country like the UK, that applied to more than 8 out of 10 people.

 That's an enormous share of people. 200 years ago in England, we had six-year-olds working in coal mines and seven-year-olds working in clothing factories. And no one had 30 days of vacation. So how has it changed in our times? Because many of the technological developments took place during the past 200 years.

 Now, on a global scale, we've got an absolute poverty rate of about 10%. That's still too high. It means 10% of the people in the world are still just struggling to survive; they don't have enough food to eat, they don't have appropriate shelter. But in comparison to 200 years ago, that's a significant improvement.

 So, to the claim that technology is just in the interest of the rich: that's a clear way to disprove it. It's in the interest of the majority of people, because for the majority of people it's simply better to have food and shelter and to live longer than was the situation about 200 years ago.
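To make the scale of that shift concrete, here is a back-of-the-envelope sketch in Python. The poverty rates are the ones quoted above; the world population figures (roughly 1 billion around 1820, 8 billion today) are added assumptions of ours, not from the talk.

```python
# Rough check on the poverty figures quoted above.
# Assumption (not from the talk): world population ~1 billion in 1820,
# ~8 billion today.

pop_1820, pop_now = 1.0e9, 8.0e9      # assumed world populations
rate_1820, rate_now = 0.90, 0.10      # absolute poverty rates quoted above

poor_1820 = pop_1820 * rate_1820      # ~0.9 billion people
poor_now = pop_now * rate_now         # ~0.8 billion people

print(f"~1820: {poor_1820/1e9:.1f}bn of {pop_1820/1e9:.0f}bn in absolute poverty")
print(f"today: {poor_now/1e9:.1f}bn of {pop_now/1e9:.0f}bn in absolute poverty")
print(f"rate fell {rate_1820/rate_now:.0f}x while population grew {pop_now/pop_1820:.0f}x")
```

On these rough numbers, the headcount in absolute poverty is about the same as in 1820, while the share has fallen ninefold even as population grew eightfold, which is exactly the speaker's point about relative improvement.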

 And then you look at the same statistics for the development of average life expectancy. 2,000 years ago, average life expectancy was about 30 years. 200 years ago, it was about 40 years. Now, even in the poorer countries of the world, countries like Nigeria, it's around 60 years. In the most flourishing countries here in Europe, it's about 80 years.

 In the richest countries, it might approach 90 years. It shows there's been a significant improvement: life expectancy in the world has doubled in about 200 years, because we've developed technologies like vaccination and anesthetics. Technology is also just better, cleaner water, hygiene, better education.

 That's also technology. When we talk about technologies, we shouldn't only think about digitalization and the internet; we should also think about getting the water clean, getting people educated. That is part of a revised understanding of technologies. And when I talk about needing a different way of thinking about who we are as human beings: one of my books, which came out this year, is called We Have Always Been Cyborgs.

 And you might say: well, I don't have an implant, so why am I a cyborg? But here the understanding of cyborg is actually a wider one. Cyborg stands for cybernetic organism. Cybernetic comes from the ancient Greek; it means the steersperson of a ship, someone who's responsible for directing a ship.

 So a cybernetic organism is a steered organism, an organism which has been altered. In the history of Western thinking, we used to believe that language is what makes us special, that language is the human capacity. And where do we get language from? Well, according to the Catholic Church, since 1869, it comes from the divine spark.

 God places it as a divine spark, attaching it to our body when fertilization happens. And that has consequences for abortion and all the other debates. Why? Because from that stage, we are quasi-divine entities, and that's how we gain language. And that's still a widely shared understanding. But the understanding that we've always been cyborgs means we've always been steered, we've always been altered organisms.

 And how did we acquire language? Well, when I was born, I didn't have language. We don't have language when we're born. We might have certain prerequisites, but then we get it as an upgrade from our parents. Our parents, our cultural surroundings, upgrade us with language; it's the first alteration we receive. So language is just a parental upgrade.

 And then we go to school and get further upgrades: we learn history, mathematics, engineering, whatever. And now we deal with digital technologies and gene technologies. Once we realize that even language, which is extremely useful, which helps us structure our lives and communicate, is just a technology which has become part of our body, then we can see the relevance of these new technologies.

 All these new technologies, gene technologies and so on, CRISPR-Cas9, genome editing, are just in tune with what we've always been doing: we use technologies in order to increase the likelihood of living good lives. And so far, doubling life expectancy and reducing the global absolute poverty rate has significantly improved the quality of life.

 We live healthier and longer, and that's why I think transhumanism is very much in the interest of global justice, and very much in the interest of every one of us living according to his or her own demands. Morphological freedom: we should have the right to use technologies and to alter our bodies according to how we want to be seen.

 We should have the right to use reproductive technologies in the way we want to use them, whether it's in a polyamorous relationship to have a child or in a different way. And that just shows how it improves the plurality and the diversity of our choices. I think that's a wonderful, wonderful achievement. And we really need to take care

 that this still prevails, that we keep living in such a diverse and open society. I think there was a very important sentence there, because you said we should have the right to use the technologies. And I am also a big fan of technologies; that's why I'm here, and I have invited several people who are involved in technologies.

 In fact, you talk a lot about genetic and biotechnologies. We are just coming out of two years of pandemic, which is still in some ways ongoing, at least in Asia and in other countries. And to have the right is something different from having the obligation, the order from the government, to use the technologies.

 And we all remember the mRNA vaccination against COVID, which is also a biotechnology that the majority of people in the developed world used. We had the ethical discussion of whether the rich part of the world should give the vaccination for free to Africa and to underdeveloped countries.

 But at the same moment, even having the right, with some people saying: no, we don't want to use this vaccination, I have a feeling that these new technologies, and COVID is just a good example, are dividing the community. Maybe it's a cultural aspect, but what I feel is that in some cases, and it was very visible not only with COVID but for example with high-tech technologies like 5G networks, there is a community of people strictly opposing the use of these technologies.

 And both groups of the divided community are becoming quite aggressive; they can even attack each other when they talk about this technology. So we should realize that there is a danger, and we should work with it. Well, I'm not as big an expert in transhumanism as you are, but I was involved in artificial intelligence systems, and I always think of it this way: when we started with computers, approximately 40 or 50 years ago, we had software.

 We got viruses as the danger. But now we have corporations globally protecting our rights, because we have antiviruses. Maybe there should also be systems protecting our rights in this transhumanist world, when we become transhumans and when we change, let's say, the way we respect other people.

 I also respect it if people do want to be a family with two women and one man. Well, being a Muslim, you can have up to four wives, so they are maybe further along in that respect. But what's your opinion about this danger? How should we really treat this in an ethical way? Because there is a danger, and all these technologies can be abused.

 There's always a danger, and there's always a danger of enormous abuse. If something gets into the wrong hands, it can be used against us; the more efficient a technology gets, the more so. You could wipe out humanity using gene technologies, by creating a virus; that has become all too realistic.

 But of course, there are risks associated with these new technologies. That shouldn't prevent us from developing them, because we need to take the risk: the advantages which go along with these technologies have brought about so many beneficial aspects. If something gets too risky, obviously we need to abandon it.

 It's a matter of calculation concerning the risks and the further implications. And it's not that every new innovation has to be embraced. If the calculation shows something has fewer positive aspects and more dangerous ones, then we need to abandon it and develop something which is more reliable.

 But if something becomes reliable, then the other issue which you mentioned comes up: what if the right turns into a duty? Can it sometimes be a duty to use a new technology, such that you ought to use it? The question is: does it have to be problematic if you're actually obliged to use a technology? And then I wonder; you look back in history, let's say 30, 40 years.

 40 years ago, you go to university. How do you write your thesis? How do you hand in your exam? Computers were not widely available. You had handwritten exams; maybe on a typewriter, but normally handwritten or typewritten. Do you know any of your professors, any of your high school teachers, who would accept a handwritten exam or thesis nowadays? No one accepts it.

 Why? Because the advantages of a PC, of an internet connection with a PC, are too big. PCs, smartphones, tablets: they've become cheap and reliable, and in the developed world everyone can afford them. We all need to use them. And so here, even though the risks are there, there could be viruses and we need to protect ourselves against computer viruses, the advantages of PCs and smartphones are so enormous that it has become a duty to basically use

 digital devices in schools and universities in order to hand in your exams. So it doesn't even have to be problematic if a right turns into an obligation. And there are also some problematic issues, obviously, and I'm actually very open on that issue; I just want to raise the question. Let's say parents can not only educate their children but also genetically modify them, and to some extent we can already do so.

 And suppose it becomes an established practice that you can make a small genetic modification, and it is very reliable; as you know, something can always go wrong, but it's extremely reliable. And it has the consequence that your child will have an increased life expectancy, or an increased health span, staying healthily alive for 30 more years on average.

 So you make the change, and the chances are extremely high that your child will have an increased health span of 30 years, with hardly any side effects, hardly any risks connected to it. Should that be a moral obligation? Would it have to be a moral obligation? Would you not use that technology if it means your child will live longer in good health? These are the tricky questions.

 And that's what many industries, many big companies, are working on. For example, in the transhumanist sector, a friend of mine, who is actually a transgender transhumanist, is the best-earning female CEO in the United States.

 She was married to her wife, as in a heterosexual couple, for 20 years; then she changed her sex, she is transgender, she became a woman, and now they live in a homosexual relationship, still together with the same wife. They've got a couple of kids as well. And she's the owner of a big company, which she actually started in order to save her daughter's life, when one of her daughters had a life-threatening lung disease.

 And recently she's been working on xenotransplantation. Xenotransplantation means you integrate genes of humans into an animal: you create hybrids, chimeras, human-animal chimeras. And that's what she did; I was talking to her about that about 10 years ago already.

 She recently created pigs with human genes inside. And then suddenly, at the beginning of this year, I read the news all over the world: a genetically modified pig's heart had been created, with some human genes inside, and it was taken and transplanted into a human being.

 And it was a successful transplantation. The patient, who would otherwise have died within a week or two, actually survived for more than two months. And he did not die as a consequence of the transplantation itself; it was an additional pathogen which was transferred that actually caused the death.

 So it was an extremely successful endeavor. As a consequence of transhumanist thinking, she realized a hybrid here: a heart which grew in a pig and was transplanted, and it helped the person survive for at least two months. And that's breaking away from the traditional ways of thinking.

 And actually challenging the boundaries between humans and animals is something many are working on. Right now, for example, we take it for granted that we get older. We might have increased our average life expectancy, but we didn't increase our maximum lifespan: humans only get to about 122 at the most.

 It really seems human genes are not made to live longer. But does that mean it is impossible for mammals to live longer? Does it mean this is really a hard limit? We are at about 80 on average; we might get to 100 on average, or 110.

 But 120 really seems to be the maximum. And 120 years is not that long, actually. When you're young you think: oh, I'm just 20. Then you're 40, and you still feel like 20. And when you're 60, you still feel the same way. My grandmother was 95, and she was still happy, just looking forward to eating her cookie the next day.

 She was looking forward to another day. When you're 20, you might say: I don't want to live that long anyway. But when you're actually there, you might think quite differently about it. And there are mammals, like the Greenland whales, that live more than 200 years. There are some mussels that live more than 500 years.

 And so here the transhumanist idea is: well, we've already successfully created some human-pig chimeras. Maybe we can take the genes which are responsible for an increased lifespan from the sharks or the Greenland whales, transplant them into humans, and then we might increase our average life expectancy.

 We might also get to 200 years, 300 years. Working on undoing aging, that's another story. We need to rethink how we think about aging. My students are about 20 years old, and I tell them: you're at the peak of your flourishing. Realize that. Use your time now.

 From 20 onwards, everything will go downhill. Wrinkles appear. Hair turns gray. Your energy lessens. Your stamina as a man declines from the age of 20; as a woman, sexual arousal declines from the age of 30 onwards. Everything's going downhill. Use your time when you're young. And we need to rethink that.

 We need to think: well, these things which are normally identified with aging, which we traditionally see as typical signs of aging, wrinkles, gray hair, maybe they are the disease. And there have been biologists who've analyzed this and realized that the dysfunctionalities which come about, which make your hair turn gray, are the same processes which cause cancer, Alzheimer's and Parkinson's further down the line.

 So the same dysfunctionalities which lead to gray hair and wrinkles have really bad implications in the longer term. And by undoing this damage which we normally associate with aging, we might significantly increase our health span. Many anti-aging researchers, many drug companies, are working on that.

 And then you can prevent the wrinkles and stay younger, but most importantly, you can prevent getting cancer and Parkinson's in the first place. And that would improve our life expectancy enormously, because for the majority of people…

 I mean, we all have very idiosyncratic understandings of what it means to live a good life. If I asked around, everyone, what are your preferences, there's probably not one thing which we all have in common. We all have very special, idiosyncratic understandings of what a good life means.

 However, a great percentage of you, maybe 90%, would at least say: well, being healthy is in some way important. Maybe it's intrinsically important; maybe it's instrumentally important, just a means, because then you can travel and have some drinks and so on. But it's in some way important.

 And that's why undoing aging, increasing the health span, is such a significant, such a fundamental goal, which many transhumanists try to promote. So we shouldn't only do cancer research and so on; we should continue that, obviously. But we should really start to deal with the physiological dysfunctionalities when they occur, the ones which show up as gray hair and wrinkles, undoing aging as it happens, so that your biological flourishing at the age of 20 continues up to 30, 40, and so on.

 And that really means rethinking many of our established paradigms. And this is not me talking about something no serious biologist supports; I'm summarizing the research. David Sinclair, who's a distinguished professor of biology at Harvard University, is suggesting exactly this.

 And there are others; this is all based on leading thinkers, leading scientists, leading entrepreneurs of our time who actually associate themselves with that transhumanist understanding. In Silicon Valley, there are many who see themselves as transhumanists. Ray Kurzweil, who is making strategic decisions for Google, is a transhumanist.

 Elon Musk is obviously the most famous one, and there are many, many others. So, just to show the relevance. I fully agree that we should permanently look for the limits, where we should still think about the ethical aspects. Because, you say, we don't have an orangutan case in the Czech Republic like in Argentina, but we have special laws protecting animals.

 It's not at the level you are thinking about. But still, the living organisms on this planet also include viruses; it has been scientifically shown that even viruses can communicate, which is why we get modified versions and so on. So where are the limits? Where should we really respect the animals as equal creatures on the planet? Where is the limit, according to you? I don't want to give an answer to that question.

 It's not for me to say. It's not like the traditional picture of the philosopher in Plato, where you climb out of the cave, see the sun, and tell the masses what the real answer is. That's not how it works in ethics. You cannot just impose values on people; that's just the wrong way of doing it.

 We need a discourse. And the values which might be right for one country might not be appropriate for a different country. It is clear that some questions which can be asked, and have been asked, in Iceland, for example, could not be asked in Germany because of the Third Reich. There are different sensitivities, and they need to be taken into consideration.

 So that's why I don't think there's any firm goal which we can aspire to, nor any stable utopia which can be reached. Any utopia is highly dangerous. Whenever you say: oh, look, this is the techno-progressive society and this is what we need to achieve, then I'm telling you this will lead to a totalitarian society on an unprecedented scale.

 This is highly dangerous. There's not one answer. We need to make the norms and values which are appropriate for the specific region, for the specific country. The one thing whose relevance I'm really trying to highlight is the importance of, in philosophical terms, negative freedom.

 Negative freedom means the absence of constraint. I think it is, historically speaking, an achievement that each one of us has the right to live according to his or her own understanding. What you dream of at night, what you long for: if you don't directly harm another person, you should have the right to live according to that understanding.

 And I think that's an enormous achievement. It has not been present in most societies, and it's still not present in large parts of the world. When you look back in history, it has always been the political leaders and the religious leaders who told you: my understanding of the good life has to be your understanding.

 And that's fascism, that's totalitarianism, that's paternalism; that's terrible, that's highly dangerous: universalizing our drives, our affects, our wishes. But our drives and affects, what we want in our lives, are extremely diverse, and the state should not tell you how to act if it concerns only yourself, or if you don't harm any other person by doing so. That always has to be the prerequisite.

 And of course, what that exactly means, what actually constitutes harming another person, needs a dialogue. But I think it's extremely important. And having grown up in Germany, I'm particularly aware of the atrocities which go along with totalitarianism.

 The one thing which we need to prevent is any totalitarian system coming about. We need to cherish plurality; that freedom is an enormously important achievement. We need to fight for that freedom, for people having the right to make individual decisions about their own lifestyles. Because that is precious and important; that's what counts in the end.

 That's what matters for all of us, because what we all want is to live a good life according to our very own understanding. And if a government tells you that you're not allowed to do so, or that you might face sanctions, that you might not get a job, that you have to feel bad about having certain desires, that you need to suppress them and must not utter your wishes in public: that's really dangerous, and it's undermining;

 you feel unfree when you live in such a society. That's why I think freedom is a wonderful achievement, but it is an achievement. I'm not claiming freedom is an absolute value, something somehow floating in a platonic realm somewhere, eternally valid. I think it's a matter of fighting: we are fighting for norms

 We are fighting for norms  and values. But I think that’s a value which is worth fighting for. It’s not, freedom is not self-evident.  Freedom is not something which has any higher status,  but I think it’s freedom is something  which we should cherish because it really increases  the likelihood of the majority of us living good lives.

 And so that's why I don't give answers. The one answer I do give is: please don't underestimate the importance of freedom for a society, and try to open up the possibilities. I think that's the issue, because we are here in the Czech Republic, and we have experience with a totalitarian system as well.

 Just thirty-something years ago we got our freedom, and we have enjoyed it since, but we should live in reality. There are many countries around, even in Europe, with autocratic systems and autocratic leaders. And if we talk about the development of technologies, in many countries there are already limitations imposed by politicians, because they don't want to support this kind of development.

 And this is, I think, one of the greatest challenges which goes along with technological developments. Sometimes in public, or in the media, when you hear people talking about transhumanism, some claim, or you might think, that the most important issue is whether a superintelligence will develop which will place us humans in a zoo, or which will try to kill us.

 That's not the most pressing issue. I'm not excluding the possibility, but that's not going to happen within the next decades. However, what really is a significant issue which we all need to think about is how we want to deal with digital data. Digital data is being collected, and there's enormous power in it, and we don't even realize it: when you're on Messenger sending photos, exchanging personal notes, having video chats, all of that is being collected by the messaging service, by Facebook, by Google.

 And if they want to look at what you're exchanging, to check whether it has some security-relevant issues, whether it undermines their structures or goes against their standards, they can look at the pictures you've been exchanging. And once you realize that, you might think twice about what you exchange.

 Anyway, there's a lot of power which goes along with data. Many say data is the new oil. Well, data is not oil: oil is a natural resource and data is intellectual property. However, the function is the same. Whoever has access to the data can use it, and can use it against you. There's power in it, there are finances in it.

 Any research, in the social sciences or the natural sciences, any policymaking, any marketing, is now done on the basis of collecting digital data. So how should we deal with that issue? Well, on the one hand, in Europe it's very difficult to collect that data. We've got the GDPR; we've made legal regulations which make data collection really difficult.

 However, we all freely agree to have Google and Facebook collect our data, and we're not worried. You leave your smartphone next to the bed; anyone could turn it on and livestream whatever happens there, or collect the data. We don't even think about all the implications. We freely agree.

 We think: oh, these are free goodies which Google gives us. They are not free goodies. We are the products. We are working for them. These big companies benefit enormously from that, because data is so relevant. But we've got another player in the field, a player even more efficient than the United States or Europe, and that's China.

 Because in China we've got a different regulation, with the Chinese social credit system. There, the state can collect all the data; they've got the political right to do so. Only companies which grant the government access to the data they have are supposed to operate in China. And so they can collect data from various fields and use it for research purposes and so on, and also for suppression, to keep their political system intact.

 But the impact goes further. It already has an impact on the sciences. For example, in the number of peer-reviewed publications in academic journals, the number one in the world is no longer the United States; it is already China. So China is making loads of money and gaining loads of power as a consequence of having the possibility of collecting digital data. And we in Europe don't collect it.

 Because of the GDPR, it's more difficult to collect data in Europe; we are not using the data. But if data is so important for finances, for economic well-being, is that the right way of dealing with it? Don't we have to rethink it? We need money for our financial well-being, money to pay for public health insurance.

 If the money goes to China, and money can only be spent once, then that has an impact on our flourishing, on our future. We desperately need to think about the meaning of digital data. And it is not only relevant to how well off we are financially in Europe; it also has consequences for the political regime we live under.

 So they collect the data, and they get richer and more powerful. And they not only use it to make their citizens richer; they also use it, as a government, to expand their political system. They can spend it on the military.

 They've created the New Silk Road, which goes via many different countries in Africa, and they've got enormous influence in various African countries. When they have the money and invest it in these countries, their political influence gets bigger. And their political influence also means military expansion.

 And theirs is an authoritarian regime. So by not collecting the data, it's not that we are simply better off; we might actually be worse off, and China will be better off. And together with that, an authoritarian political system might expand further, which strongly undermines negative freedom, freedom as an achievement.

 So don't be afraid that a superintelligent algorithm will put you into a zoo; that's not going to happen in the next decades. I'm not saying it can't ever come about, but I think we really, urgently need to rethink how we want to deal with the data.

 Because it's important for our financial well-being, and it's important for the political system which we live in. And if we don't want to end up in an authoritarian, Chinese-dominated system, then I would strongly recommend rethinking who we grant access to the data and how we want to use it.

 Thank you. We are coming to the end of the session, so I would also like to give the audience some space to ask questions. So, is there any question? There, I can see a gentleman. Yes?

 [Audience question, partly inaudible:] Can technology increase the capacity of the environment to sustain human life?

 That's an extremely important question, obviously. So the question is about the trade-off; that's a general, often-used response: by using the technologies in our human interest, aren't we destroying the environment? Isn't that leading to climate change and so on? The question is the following:

 What are the consequences, and how can we regulate the usage of the technologies? If we implement something in the European Union, what are the consequences for the rest of the world? In China, several hundred new coal-fired power stations are being built. In India, the same thing.

 The consequences for climate change are enormous. If we just shut down some coal-fired stations here, they build a hundred new ones. We cannot undo the bad consequences of what is being done in other countries. So the question is, how can we solve that issue? Well, one solution would be to establish a global government and tell everyone what to do. That's not a likely option, and it's not a really feasible option either. So what is one of the major drivers of climate change, of destroying the environment?

 Well, one of the major drivers is us humans: through the consumption of red meat and so on, the use of gas and so on, we are the major drivers of climate change. What would be the implications? If you think that climate change is a major issue, would you then, at least personally, have to stop reproducing? Because by reproducing, you create other humans.

 Humans are the major cause of climate change. Should we then, if humans are the main problem, implement new eugenic systems, stopping humans from reproducing, implementing a one-child policy, as China had for some time? Should we implement that all over the world? Again, that's neither a desirable consequence nor a realistic one.

 That’s why  I’m suggesting, no, we need to find a a better solution we need to find a solution which basically gets freely adapted all over the world why because  it’s better and one of the examples would be for example the richer a  country gets the richer population gets the higher the consumption of red meat  red meat is simply simply identified with luxury.

 I'm better off, I want to have my steak. And that's what's happening in China: China is getting richer, India the same, and meat consumption increases. And the animal factories, the suffering, the slaughtering increase together with that. And that's a major challenge. Do you want to go to China and forbid them to eat meat? That's just not a realistic option.

 It's not only unrealistic; it's also highly paternalistic and maybe even totalitarian. So that's not a plausible implication of what we can do. We need to find a better solution for dealing with these issues. And one of the alternatives, for example, is that instead of having more animal factories and so on, we create in vitro meat.

 We've already got in vitro meat being offered in burgers in Singapore; it can already be sold, and it has enormous advantages. We don't need to kill, we don't need animal factories with all their carbon dioxide emissions, we don't need to pollute the soil with urine, and we don't have to give antibiotics to the animals, which are otherwise needed because the animals live in such terrible conditions that some animal is always ill.

 That's why, today, every animal needs to be given antibiotics all the time, which in turn creates antibiotic-resistant bacteria. By getting rid of that, by having in vitro meat as an alternative, we can still offer humans the possibility of eating meat, which is identified with luxury,

 but without the challenging aspects of carbon dioxide emissions, and without the problematic implication of increasing the likelihood of creating antibiotic-resistant bacteria. And this is just one example. So that's why I'm saying: either we have the option of creating a global government which implements eugenic rules and forbids humans to procreate, and I just don't want to live in such a system;

 it's neither realistic nor desirable. Or we create technologies which work better, which have better implications, and which are adopted by the free choice of people in the various parts of the world. And that's why I think this is not the perfect solution, but it's as good as it gets, because the other alternatives don't seem feasible or desirable to my understanding.

 We talk philosophy, as you can see; every answer is quite long and complicated. We are running over our time limit. I think Stefan is still here, so we can switch to an informal discussion if anybody wants to talk to Stefan after this session. I would like to thank you for coming to Ostrava. I hope it's not for the last time, and that we can discuss transhumanism again in the future.

 

We Have Always Been Cyborgs: Professor Stefan Lorenz Sorgner – John Cabot University

Transcript:

 Emerging technologies radically change the way we live. They have an effect upon economics,  they have an effect upon the arts, but also on our everyday lives. The basic underlying idea  is that we’re using emerging technologies in order to alter who we are, to break beyond the boundaries of  our current human limitations.

 And I'm presenting a way of approaching these issues, a new solution concerning how we should revise our structural, our societal being together. What I'm presenting in this new book is that we have always been cyborgs. We develop language by means of our cultural circumstances, by means of our parents.

 We need to break away from the encrusted structures of our cultural past and develop a new ethical understanding for dealing with these challenges.

 

The Future of AI: Economics, Alignment, and Extinction – YouTube

 

Transcript:

 Hello everyone, welcome to Reinforcement Talking, a new podcast series from the UCL AI Society. I'm very excited to be with my co-host and friend Javier Torre, and our guest Wim Naudé. Wim is a scholar, entrepreneur and politician, currently a visiting professor at RWTH Aachen University and a distinguished visiting professor at the University of Johannesburg.

 Wim served for five years as an elected councillor for the African National Congress (ANC), and is now a candidate for the Green Party in the Netherlands. He has founded several startups, including in workplace analytics and sustainable development, and is ranked by Stanford University's rankings among the top 2% of scientists worldwide.

 Wim, it's great to be with you. Good morning, thank you very much for the invitation. Amazing. So today we're going to be talking about AI and economics, in particular about AI alignment; that will be with me in the first half, and then we'll go on to talk about transformative economic growth with Javier in the second half.

 But before we jump in, Wim, could you explain a bit more about your background and how you became interested in these sorts of topics? Yes, thank you. Well, I grew up in South Africa, and early in my career I was really interested in topics of economic development, political liberation, and what that means for economic growth and economic development.

 And as an economist, I studied the fundamental challenges that are faced by countries and people on this development trajectory, and asked the question: what leads to economic development? Of course, one of the answers is technology. It's our innovation capability as a human species that allows us to transcend boundaries, to overcome difficulties and to enable a better life for all.

 Now, of course, this technology has to function within institutions that we create, rules and regulations; we still need to solve conflicts about the distribution of resources, etc. So it's not a simple answer, but my interest was really piqued by the role of technology in human development, because clearly, over the last two or three hundred years, technology has enabled us to do an incredible amount, to achieve quite a lot.

 We are probably the richest generation ever to walk this planet, with incredible capability. The question is whether this technology is only a good thing for us: it has not only uplifted the human species, it has also posed some existential risks. We obviously have the challenges relating to climate change and whether we can sustain our environmental footprint.

 Our impact on resource use on this planet is a very relevant question. But there's also the question regarding artificial intelligence: maybe the technology that we create will in the end turn on us. So we can either wipe ourselves out by destroying the planet, or destroy ourselves through artificial intelligence.

So these are the two kinds of existential risks that are very much currently in vogue, getting a lot of concern. And in my paper that we’re going to talk about today, I also talk about the bigger picture, in terms of the Fermi paradox: we don’t see other civilizations like ourselves in the universe.

And it has been proposed that technology may be a kind of mechanism which leads all societies, at some point when they become intelligent enough, to self-destruct. So I grew up being intrigued by these types of questions. I studied economics at the University of Warwick in the United Kingdom, specifically mathematical modelling, and built large-scale models of economies back in the 1990s, using the big data that was available at that time and the mathematics to simulate these artificial economies to try to support decision making.

 So I still find it very interesting to look at the way in which now with big data and smart algorithms and data science,  we have gone to build much more realistic models in which to kind of like, you know,  make simulations of policy and trying to expand our knowledge of how to steer the global economy with all its complexity.

Yeah, I’m very much looking forward to getting into all of these topics, and hopefully we’ll have time to speak about all of them. So starting off: what is this term, the alignment problem? It’s a phrase that I think is gaining more and more traction at the moment. So what do you understand by it? And why is this a problem that we should be putting our attention to?

So the objective of computer scientists and other people working in the artificial intelligence area is to design autonomous intelligent agents. This is to say, intelligent agents that can make decisions on their own. Now, of course, the question is, how can we make sure that in making these decisions, these autonomous intelligent agents

 How do  we get smart, intelligent AI systems that are autonomous to do what we want them to do?  This is the basic nature of the alignment problem. We can also move the alignment problem,  refine it a little bit and say, well, we want these autonomous intelligence systems to do  things that are aligned with values of humans, which then begs the question, what are these values and whose values?  So I generally tend to prefer to look at the alignment problem as the problem of getting AI to do what we want it to do.

So, for instance, some companies may want AI to help them sell products, and they would like to know that the AI is aligned with that. Some governments may want AI to help them improve social assistance, and maybe the targeting of welfare payments; so they want to align the AI with those objectives.

Credit card companies may want AI to find possible fraudulent transactions; so they want to make sure the AI can do what it is supposed to be doing. So that’s the one point. If we then look at the values of humans, then often it’s been said that in pursuing these kinds of goals, if we get AI to do what we want it to do, we should also try to avoid any negative, unintended consequences or negative externalities.

And these are typically not problems that are unique to artificial intelligence. We have seen with all technologies in the past that there are always unintended side effects and externalities. So we can take electricity as an example. In the first few decades, at the end of the 19th century, when electricity became available in people’s houses, many people were electrocuted.

In fact, I’ve read somewhere, I don’t know if it is true, that at some stage in London they had copper plates for the electricity switches in houses, right? Not being aware that copper is an excellent conductor of electricity, and not very good practice. But people learned, and slowly we addressed the technological externalities from electricity with new technologies: switches that prevent or minimize the chances of being electrocuted.

 We’ve seen the same with automobiles.  You know, the number of people who died in road accidents as a proportion of cars on  the roads fell dramatically since the first cars went onto the road.  And so we have the same type of unintended consequences with artificial intelligence.  And part of the bigger alignment problem is to make sure that also these unintended consequences and negative externalities are limited and eventually completely eliminated.

 Some people think that the externalities from the alignment problem go beyond the risk from  people being electrocuted or there being accidents on the road, and that this could be an existential  problem.  What are the main arguments for this being an existential risk?  Yes, that’s a good question.

 Of course, that is, you know, so when we deal with electricity,  we may see that electricity doesn’t have its own type of objectives. It doesn’t have its own,  you know, intentions and goals that it wants to pursue. So we just have to make sure that we  handle electricity in the appropriate way.  With an intelligence, a super intelligence or an artificial intelligence that’s autonomous, that can learn by itself.

So the modern approach to artificial intelligence, machine learning, deep learning, allows agents to learn. And if they are smart enough, they will eventually recursively self-improve. And that means they can become as intelligent as humans, at least philosophically or in theory.

 And they can even very quickly  become even more intelligent than humans. Now, there are two kind of like threats in this. One is that once you find a recursive self-improving AI, it may just become  so intelligent so quickly that we cannot do anything about it in time. So there’s not going  to be any fire alarms to say that there’s going to be this very intelligent system that’s going to  come into existence.

And secondly, if we cannot align its goals, if its goals diverge from what humans want, so it doesn’t do what we want it to do and we cannot get it to do what we want it to do, then it may, either deliberately or unintentionally or as a side effect, cause humanity great harm. And that can come about in various ways. There’s the kind of science-fiction horror scenario of an evil AI, which perhaps sees human beings as a threat to its own existence.

 In other words, maybe we don’t like it.  Maybe we’d like to pull the plug on it.  And so preemptively, it decides to eliminate humans as far as possible.  It may also be that it’s completely unintentional.  So we’ve got all these discussions about the sub goals or the instrumental goals that AI  might follow.

So for instance, if AI learns that humans like strawberries, that’s the example that’s been given, it may just decide to use all of the arable land on the planet to plant strawberries so that we can get the maximum amount of strawberries possible, right? So that’s kind of a caricature of what may happen.

But more generally, we also have the paperclip maximizer case and the King Midas examples of AIs that, because they have an ultimate goal, say human happiness, and they see that something specific, gold or strawberries or paperclips, makes people happy, use everything they can to provide that, and in the process create quite a lot of havoc.
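To make that objective-misspecification point concrete, here is a minimal toy sketch in Python. All names and numbers are invented for the illustration, not taken from the paper or the podcast: an optimizer rewarded only for strawberry output plants strawberries on every hectare, because nothing in its stated objective says otherwise.

```python
# Toy sketch of a misspecified objective (all numbers are invented).
# The "optimizer" is a plain grid search rewarded only for strawberries.

TOTAL_LAND = 100.0  # hectares available in this toy world

def strawberry_reward(strawberry_land: float) -> float:
    """The only thing the stated objective measures: tonnes of strawberries."""
    return 3.0 * strawberry_land  # 3 tonnes per hectare, made up

def actual_welfare(strawberry_land: float) -> float:
    """What we really care about but never told the optimizer: we also
    need land left over for housing and growing everything else."""
    other_land = TOTAL_LAND - strawberry_land
    return min(strawberry_reward(strawberry_land), 5.0 * other_land)

# Search allocations in 0.1-hectare steps and keep the highest reward.
candidates = [i / 10 for i in range(int(TOTAL_LAND * 10) + 1)]
best = max(candidates, key=strawberry_reward)

print(f"optimizer plants strawberries on {best:.0f} of {TOTAL_LAND:.0f} ha")
print(f"stated reward: {strawberry_reward(best):.0f}   "
      f"actual welfare: {actual_welfare(best):.0f}")
# The corner solution (all land to strawberries) maximizes the stated
# reward while driving the unstated welfare term to zero.
```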

But the harm can also come in a seemingly benign form. If you think about Huxley’s novel Brave New World, all the humans are very docile, they are chemically in a state of bliss, and they are completely useless and no threat to the authorities, because they’re essentially what’s known as being wire-headed.

And it can just be that such a clever AI or super intelligence knows which buttons to press and how to create in the human brain these states of bliss and permanent dopamine rushes, so that we literally become hooked to ICT systems that directly provide us with all this bliss in our brain and we don’t need to do anything else. So we basically become, as I put it in my paper, wire-headed into “doddering Neanderthals”, as one author has put it. So these are all kinds of ways in which an artificial intelligence could cause harm.

There is also a related problem to the alignment problem, which could be called a political problem: say, for instance, there is a super intelligent artificial intelligence, but it’s somehow steered to maximize the advantage of only certain people.

So say, for instance, one country or one group or one company manages to get such an AI created, and that AI is maximizing only the benefits of those people. It could create huge amounts of inequality in the world, and it could enslave part of humanity, at least in theory. So this is also a type of unintended consequence, or a bad use of AI for political reasons.

 And we already see, you know,  even our simple narrow AI that we have at the moment can create quite a lot  of havoc if it’s put into autonomous lethal weapons and other kinds of,  you know, weaponized systems.  So maybe that is a potential threat.  So this is why there is the concern  that it may create an existential risk.

Yeah, that makes a lot of sense. And I think it will be very useful for listeners who hear about this problem and don’t instantly assume that it could have such massive ramifications. So turning back to the alignment problem, there are at least two threads here. I think the way you put it in your paper is: there’s the question of what decision theory we want an AI to follow, what ethic, if you like, we want to encode.

 And then there’s the technical problem  of ensuring that the AI actually follows this.  So what do you think economics can contribute to either of those problems?  Yes.  So if we put ourselves in the position of an artificial intelligence scientist who aims to build these autonomous intelligence systems, there are two questions that we need to answer.

The first is, as you mentioned, what decision theory do we want such an AI system to follow? In other words, how does it decide what to do? And the second question is, how can we implement such a decision system once we have it? Now, economics is already providing guidelines and inputs into that first question, what decision theory we want an AI to follow. On the second question, how to implement that decision theory, economics, in my view, does not yet provide sufficient answers. So far, economics has

 been focusing rather strongly on the labor market implications of AI, in other words,  on automation. And this has been stimulated by potential concerns that artificial intelligence would lead to high unemployment.  So the bulk of work in economics has been trying to address these labor market implications.

But what economists have done is they’ve neglected two aspects. They’ve neglected precisely this alignment problem; economists have not weighed in significantly on the alignment problem that we have talked about so far.

 And secondly, what we’ll talk about later,  they’ve also not weighed in very significantly on issues of the singularity and the existential  risk of AI. But they can make a bigger contribution to the alignment problem because the fundamental basis of decision theory  that are being used by AI scientists is something that comes from economics to a certain degree.

So you could say that economics provides the core of the decision theory that is being used by AI scientists. So that’s my call: that economists engage much more with scientists in AI to elaborate and improve their specification of decision theory.

Now, coming back to the question: what decision theory do we want an AI to follow? Well, of course, we want that to be a rational one. We don’t want to start off and say we want irrational autonomous AI agents out there in the world. We want them to be rational.

And this concept of rationality is actually economics’ main export to the other social sciences and other sciences. This is actually what Herbert Simon specifically was also pointing to. And this is also known as homo economicus: the rational economic person who takes rational decisions.

Now, how does one take rational decisions? And how is this modeled in economics? Well, the scientist John von Neumann, also famous for computer science and for contributing to nuclear physics, together with Oskar Morgenstern, set out expected utility theory in 1944 as a foundation of neoclassical economics in their book Theory of Games and Economic Behavior, which basically also introduced game-theoretic analysis.

And in the 1950s, this was generalized by Jimmie Savage in a very influential book on decision theory. And essentially what von Neumann and Morgenstern and Savage did was propose that people maximize expected utility when they have to make a certain decision. And this expected utility basically says that people have utility functions.

And this utility function just allows you to make a comparison between different outcomes from actions, right? So you do this, and then this happens, and you can rank that on a preference scale. So you can compare the outcomes of different actions by how much they conform to your preferences.

And you would choose the kinds of actions that enable you to satisfy more of your preferences rather than less. That is also part of the rationality. And there are a number of theorems showing that, if certain conditions hold, expected utility maximization is the most rational decision rule.

And this idea of rationality and expected utility maximization or optimization is precisely what we’ve seen in artificial intelligence systems as well. Artificial intelligence systems all have an objective function, a goal function or a loss function, that they try to optimize using data: looking at the data, making a decision, predicting what the right decision is, and minimizing the loss function.

Minimizing a loss function is just the same thing as maximizing a utility function or a goal function. So you could also see utility functions as the goal function in the language of artificial intelligence systems. That’s the essence of what economics has contributed to the decision theory of artificial intelligence. In terms of the second question, we can also talk about that, but that’s then essentially how to implement such a system in AI.

 In terms of the second question, we can also talk  about that, but that’s then essentially how to implement such a system in AI.  Thank you. I find this something really interesting and the idea that social sciences,  which I’m personally very interested in, can make a big contribution here, I find very exciting.  So we got this idea of homo economicus and maximizing utility.

Several things come to mind here for me. Earlier, you mentioned we need to ensure that the utility being maximized is not just limited to a particular group. I think this is very important; we don’t just want it to be the utility of the owners of the AI, for example. But also, I think what I find difficult is how we can actually conceive of what this utility is, because there are many senses in which we can understand utility, right? It could be our preferences, it could be our values, it could be our intentions. And so how do you make sense of what human utility is?

 There’s both a strength and a weakness in the approach.  And it’s precisely because it’s amenable to broad interpretation and broad application.  So the benefit is that with utility,  it doesn’t matter what you put in your utility function.  It can be anything that you have a certain set of preferences over.

And it’s not so difficult to envision, in different cases, that we have these different sets of preferences, right? And these preferences are particularly important to the extent that we can compare them, so we know which preferences would give us greater satisfaction or utility if we were able to achieve them.

And utility would be completely different for different people. This is also why, initially, the whole idea of expected utility maximization and optimization actually comes from gambling in the 18th century. Gamblers were concerned about how to take the best decisions to make the most money from betting, so these are kind of dark origins for something very useful in social science today. So the gamblers focused on this problem. And initially,

the idea was that they would maximize the value in monetary terms. But that ran into all kinds of problems, including the St. Petersburg paradox. And this was essentially solved by substituting utility for monetary value. So this means that money will give us utility, but other things will also give us utility, not just money.

And even as money increases, the utility that we get from each increase may be decreasing. So if you’re a billionaire, an additional euro or pound of income will probably not give you as much marginal utility as it would if you were much poorer. So this gave us a much better handle on dealing with the different types of goods that we desire or that we use.
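A quick worked example of that diminishing marginal utility, using the textbook logarithmic utility function u(w) = ln w. The speaker does not commit to any specific functional form; log utility is just the classic illustrative choice.

```python
# Diminishing marginal utility with log utility u(w) = ln(w).
# The functional form and the wealth levels are illustrative only.
import math

EXTRA = 1_000  # the same extra 1,000 euros/pounds for everyone

for wealth in (10_000, 1_000_000, 1_000_000_000):
    marginal = math.log(wealth + EXTRA) - math.log(wealth)
    print(f"wealth {wealth:>13,}: utility gain from +{EXTRA:,} = {marginal:.7f}")

# The gain shrinks from ~0.095 at 10k wealth to ~0.000001 at 1bn:
# the billionaire barely registers the same extra income.
```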

So this is the usefulness of the utility function. Of course, when you put it in the context of the alignment problem, what we essentially see is that we have different agents with different utility functions, and that there is allowance for different utility functions. My utility function will be different from your utility function, because I may have different preferences.

And so we function quite well in society with different utility functions, relatively well. Now, as I said, the idea of utility functions and their maximization, expected utility theory, came from gamblers who were trying to get the best out of gambling and make the most money from

 Now, of course, if you play against the dice,  you play against nature, you have some objective probability that you are facing.  But when you are in a situation where there are different intelligence agents who have different  utility functions, then you are in a multiplayer situation or a multi-agent situation.  And when you are in a multi-agent situation, it becomes increasingly important to also have what is called utility function coordination.

Because obviously, if I have certain preferences and they differ from yours, we may come into conflict. And this is also where von Neumann and Morgenstern’s contribution came in, in game-theoretic analysis, because in game-theoretic analysis you assume that your behavior will influence somebody else’s behavior.

And that, in turn, influences your behavior. So we are in strategic interaction, and this has broadened decision-making theory significantly, leading to the important insight, which is also relevant for AI today, that if each and every one of us individually attempts to maximize our own utility, this does not guarantee that society as a whole, in the aggregate, will maximize its social utility or achieve the best outcome.

This is because of the negative externalities, for instance, in each individual agent’s calculation. A classic example of where individual maximization or optimization of utility gives rise to a suboptimal outcome at the aggregate level is the prisoner’s dilemma: if each of the prisoners does what’s best in their own interest, that will not necessarily be the best outcome for all.
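The payoff matrix below is the standard textbook prisoner’s dilemma (higher numbers are better), included just to make the point checkable: defection is each player’s best response to anything the other does, yet mutual defection leaves both worse off than mutual cooperation.

```python
# Standard prisoner's dilemma payoffs (higher is better for the player).
payoff = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,   # C = cooperate
    ("D", "C"): 5, ("D", "D"): 1,   # D = defect
}

def best_response(their_move):
    return max(("C", "D"), key=lambda my: payoff[(my, their_move)])

# Defection strictly dominates: it is the best response either way...
for their in ("C", "D"):
    print(f"best response to {their}: {best_response(their)}")

# ...so individually rational play lands on (D, D), which is worse for
# both players than the cooperative outcome (C, C).
print("both defect:   ", payoff[("D", "D")], "each")
print("both cooperate:", payoff[("C", "C")], "each")
```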

And to a large extent, we can also see a lot of the discussions today about our capitalistic system and about climate change as being about individual maximization that runs into socially suboptimal outcomes. And the fundamental problem in AI alignment is that creating different intelligent agents with different utility functions, which may or may not be aligned, will create a large degree of potential conflict, with humans but also between different artificial intelligence agents. And,

as you say, this is what really makes it a very difficult problem to solve. And I think it’s also why we are still, in my view, very far away from solving the alignment problem. So, as you say, given that people’s preferences are very different from each other, you say this is actually a big strength of incorporating utility into the alignment problem.

But because people’s preferences are very different, they’re not always going to cohere with each other. So some people’s preferences might have to be weighed against other people’s. One response to this might be to say that AI shouldn’t be doing this sort of moral calculation of people’s preferences against other people’s, especially if human rights come into play.

 Yes, indeed. I think to highlight what you are saying and the importance of a rights-based approach is to take just one step back and make a distinction between narrow AI and artificial general intelligence.  So with narrow AI, this is the AI that we currently have in the world.  Although it’s very impressive what we already do with AI, it’s not intelligent.

It’s basically just prediction, and great recommender systems, right? So, ChatGPT is nothing but a big recommender system, which Noam Chomsky has recently called a huge plagiarism machine. And we have to contrast that with artificial general intelligence, which would be a super intelligent system that is as intelligent as humans and can also function in general circumstances, not just on narrow, specific tasks.

So with current AI, we have general machine learning tools, and these tools are there to predict something or to find needles in a haystack; pattern recognition and prediction are basically all that modern, narrow AI is doing. But it needs a human to apply these machine learning techniques of pattern recognition and prediction to a particular domain.

 So if we apply it to natural language processing, we get search engines and we get chatbots.  If we apply it to recognizing road signs and other users on the road, we get autonomous vehicles.  But it’s essentially the same type of system.  Now, in each of these cases where the machine learning, deep learning tools are applied for prediction and pattern recognition to a particular domain,  then we can use the expected utility approach very usefully because we can specify a particular objective.

 For instance, recognize this object. Is it a road sign or is it a person walking into the road?  And so what you want to maximize is the prediction accuracy.  You cannot make a mistake and say, well, it’s a road, but it’s in fact somebody’s house, you know.  So you need to be accurate in your prediction.

 And it’s the same with, you know, predicting what people are going to search for or finding information.  There needs to be a match.  So what you do is you minimize loss.  So it’s very easy to specify to the system, maximize this function or minimize this loss function, you know, subject to the  information that you have at hand.
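Here is a deliberately tiny sketch of that “minimize the loss given the data at hand” loop: a one-parameter model fitted by gradient descent. The data points and learning rate are invented for the illustration; the machinery is the same whether you call the objective a loss or a negative utility.

```python
# Fit y = w * x by gradient descent on mean squared error.
# Data and hyperparameters are made up for the illustration.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def gradient(w):
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0
for _ in range(200):
    w -= 0.05 * gradient(w)  # step against the gradient to reduce loss

print(f"learned w = {w:.3f}, remaining loss = {loss(w):.4f}")
# Minimizing this loss is formally the mirror image of maximizing a
# utility or goal function; only the sign and vocabulary differ.
```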

And if you have more information, you can update your accuracy over time. So you become better at doing this; you learn by doing. Now, in these narrow domains of AI, this approach works great. This is why AI scientists use the approach from economics, expected utility theory. And in fact, what they have found is that AI systems are perhaps even better at this than humans. Humans still have certain problems.

We still sometimes tend to be irrational, and we also have bounded rationality, in the sense that the computational powers of our brains are finite. So if we face a certain problem, sometimes we just don’t have enough time to calculate the optimal solution. But what we see with machines with huge computing power today, computing power far greater than the human brain’s, is that machines can make better computational efforts at cracking problems; and also, as they learn, they can unlearn certain

biases. So in human decision-making, one of the criticisms against the neoclassical expected utility model, especially from behavioral economists like Daniel Kahneman, and a very valid criticism, is that we have all these biases in our thinking. We have cognitive biases; there’s a whole Wikipedia page listing 200-plus of them.

But potentially AI systems can find these biases in their own thinking and get out of them. For humans, it’s very difficult to unlearn our biases; our brains are perhaps not so fluid in that respect. So potentially we have homo economicus reflected much more in the world of AI. If we apply it to these small types of domains, it can work very well.

And this is in fact also going back to what Jimmie Savage said in the 1950s with his example of small worlds and big worlds. He said, well, these models of economics and of expected utility maximization work very well in small worlds, where you have everything under control. You have laboratory conditions, you can update your probabilities, and you can optimize very easily.

And so it works very well. And this is also my argument: economics is really good at providing these tools for the current AI that we have. Now, coming back to your question about the rights-based approach and why it is very relevant for this type of problem: say, for instance, in the application of an AI tool to autonomous driving systems, whether in airplanes or in trains or in automobiles, if something goes wrong, if there are mistakes being made, human safety is endangered.

And all these tools can be used for very malicious purposes, like cybercrime or cyberwarfare, or in autonomous lethal weapons. And in these respects, what we need to do, just as we do with other technologies, is make sure that human rights and the law are not violated by these new types of tools.

In other words, if the tools allow us to commit crimes in new ways, that’s still a crime, and we should still make sure that we are able to use the power of the law against it. And people have also argued that we should be doing due diligence and rights testing, stress-testing our systems to see to what extent human rights are being affected negatively by these types of systems.

But it needs to be done for each and every domain. You cannot have a general approach that says, well, all AI needs to be subject to the following test. Every time we have a new application of AI to a particular area, we need to check whether there are intended or unintended negative consequences for human welfare, and deal with them.

So that’s very fine and very appropriate when you deal with the current narrow AI. But say we develop at some stage an artificial general intelligence, which, I also have to stress, we don’t have yet, and there’s actually disagreement about whether we will ever achieve it by using machine learning or deep learning.

So not everybody agrees with that. There are many scientists and notables who are quite confident that deep learning will never scale up to anything like human intelligence, for various reasons that we can discuss. But there are also some people who say it may be the case that deep learning can scale up to human-level intelligence, and then, with recursive self-improvement, the AI can become much more intelligent than humans. Then we reach what’s been called a singularity, and post-singularity we don’t know what will happen; all bets are off as to what kind of world we will inhabit in such a post-singularity environment. And this is where the existential threats come in. Now, in such a case, the human rights approach seems probably not able to overcome the threats as they have been perceived.

It seems that such a super intelligence may be able to overcome even those types of approaches, just as we see in real life that autocrats very often take over governments. One of the fears expressed by philosophers is that such an artificial general intelligence will lock us into a situation like an autocracy. This is also what Nick Bostrom has called the singleton: it would be like one strong government, run by the super intelligence, and it would impose its rules, perhaps draconian rules, even on humanity.

And there’s no way we could get out of that. We would be locked in forever in this type of extreme dystopia, where human rights would probably not count for anything, because we would have lost the power to enforce them, right? Human rights need to be enforced, people need to agree on them, and we need mechanisms to ensure that we address violations. But in such an AI dystopia with a singleton, there would be nothing we could do; it would be the ultimate long-term autocracy. And that’s been seen in many respects, also by the longtermist movement, as perhaps the most troublesome potential threat from an artificial general intelligence. So I hope, with this long-winded discussion, I’ve come around to your question: yes, the rights approach is important, but I think ultimately, once we’re in a singularity, all bets are off.

Yeah, that makes sense. So within the small world of narrow AI, expected utility theory is useful if it is, as you say, stress-tested against human rights. But at a larger level, if AI becomes more generalized, alignment is much more difficult. Yes. So on that optimistic note, I think now is the time to pass over to Javier.

 And so we can spend more time thinking about  the potential transformative economic scenarios  as a result of general AI. So thank you. Yeah, Javier here and I just wanted to  try to sketch a picture of how a world with super intelligent AI existing would develop  and whether that life would be desirable at all.

And so the first thing that comes to mind for me is that you argued that the time it would take for a narrow AI to grow into a super intelligence, once it has reached the threshold of self-improvement, is very short. And so I think, if the singularity develops so fast, wouldn’t that inevitably result in one company or institution or country suddenly becoming massively more powerful than the rest? And so how would countries react to this, and what would that imply for the political climate? Yes, so this is one of the scenarios that has been discussed as relating to this artificial general intelligence being theoretically possible.

Personally, I am a little bit sceptical that it will develop so quickly. There has been a very interesting debate in AI circles, which is known as the Foom debate.

 And then it’s all over for, you know,  for whoever other intelligences there are,  because there will be a winner-takes-all effect,  right? So if you’re the first company or the first AI that’s there, you can dominate all other intelligences and you can then prevent them from actually developing further.

 So you have  really a strong winner-takes-all potential outcome here. Personally, I don’t think that it will happen so quickly. I think even if  there are companies and governments that are investing in AI, the type of investments that  you need is very large. We know how much Google and Amazon and Facebook spend on AI. It’s billions.

And it takes a lot of complementary investments. And these are not really something that you can keep completely secret. So I don’t think you’ll find an AGI easily being developed in somebody’s garage and then suddenly just taking over the world. It’s something that proceeds by intermediate steps. And these different steps allow us to check it, to see where we are going and to be able to take some action.

So I’m personally not completely convinced that we will have no fire alarm and that suddenly we would see artificial general intelligence being imposed on us. But there are philosophers and scientists who think it may be possible, and that because there’s a small probability that it could happen, we should be careful and not allow ourselves to be caught unexpectedly. And therefore we should be thinking about the possibility, and about how to make that possibility as

small as possible. And that then comes to the question of, as you say, the company or the country that is first to develop such an artificial general intelligence may perhaps, if they can control it, we have to say as well, if they can control it or align it to their interests, have this winner-takes-all effect, and they would indeed then dominate.

And this is also why some people have said, well, we should be concerned about an AGI arms race: that we have a couple of companies or countries that are now in this arms race to try to develop the first AGI, because that will give them supremacy. They would have the supreme weapons system and the supreme economic system, et cetera, to dominate the world. And we should be aware that these incentives exist: even if the scenario may not be realistic, the prospect may provide a huge incentive for companies and governments to try and invent such a system.

And at least once you have an arms race, it’s almost a race to the bottom to a certain extent, because then everybody says, well, we need to do it because somebody else is doing it. We’ve already seen this with national AI strategies around the world.

I think there’s a long list of countries now, maybe a hundred or so, who each have their own national AI strategy to develop their AI capabilities. And this is just because other countries have one; if you don’t have it, you are seen to be outside of the loop and not taking proper care of the situation.

So it may very well be that this type of winner-takes-all approach will lead us to actually misallocate and misspend a lot of funds in the pursuit of this type of system. So that’s another potential downside of this idea of artificial general intelligence. Clearly, we need to be very careful about this.

Next, I want to talk about growth. I saw in your paper an interesting concept, or rather problem, that I hadn’t thought about before and that AI could potentially solve, and that’s the empty planet result. So can you tell our listeners what you think about that? Yeah, so let’s first say what the empty planet result is.

So for a long time, until very recently, people had tended to think about overpopulation as the problem facing the world: too many people on this one small planet, we’re destroying the environment, we’re using all the resources. Back in 1798 the Reverend Malthus predicted that population growth would be unsustainable and lead to huge numbers of people dying.

Paul Ehrlich, too, in the 1960s published a book, The Population Bomb, predicting famines and huge numbers of people starving to death. But more recently, economists, Chad Jones for instance, have written a very nice paper on this showing that perhaps the more realistic threat over the next 100 to 200 years will be depopulation: in a large number of countries, the fertility rate is already such that the replacement rate is not being maintained anymore.

So especially in the West, I mean Europe and the United States, the fertility rate fell below the replacement rate as far back as the 1970s. So each generation is about 20% smaller than the previous generation. According to Chad Jones’s calculations, in around 85 to 100 years population growth will be so small, basically falling to zero, that this will also bring economic growth to a standstill.
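A back-of-envelope version of that arithmetic, taking the “each generation about 20% smaller” figure from the conversation and assuming, purely for the sketch, a 30-year generation length:

```python
# Generation-on-generation shrinkage at the ~20% rate mentioned above.
# The 30-year generation length is an assumption made for this sketch.

population = 100.0  # index today's population at 100
for generation in range(1, 5):
    population *= 0.80  # each generation ~20% smaller than the last
    print(f"after {generation * 30:3d} years: {population:5.1f}% of today")

# Within roughly a century the cohort is about half its starting size,
# which is the horizon Jones's calculations point to for growth stalling.
```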

And eventually, our problem will be an aged population and a small population that are less innovative, because the important thing about people is ideas. In the typical endogenous growth models in economics, which Paul Romer developed in the 1990s and later won the Nobel Prize for, the key driver of economic growth is ideas.

 And ideas also come from people, from population, because ideas are combinatorial  in nature. And with more people, you’ve got more ideas and more combinations. And this has been a very strong, you know,  way of driving technological innovation and growth.  This is also what Esther Boserup in the 1960s found in her work that she did  in the Sahel area in Africa,  that population pressure actually pushed economic development through  triggering technological development.

So the idea then is that with population falling, there will be fewer ideas and less innovation going around. Now, artificial intelligence has been argued to be an innovation in the method of innovation. So where we have fewer people around to do the innovating, perhaps artificial intelligence can step in: given that innovation and ideas are combinatorial in nature, AI can perhaps see many more combinations amongst existing ideas than humans can.

We’ve seen this, for instance, already in the protein folding problem that AI seems to have solved, because AI can overcome the burden-of-knowledge problem that we face with our limited intelligence, working through all these types of facts in a much shorter period of time.

So we expect more discoveries to be made by AI putting different combinations of ideas together. We’ve also seen this already coming through in doing literature surveys. If you’re in any scientific discipline today, there are more articles published every year than any human can read in a lifetime. So what do we do? How do we extract useful ideas from that? Well, artificial intelligence, being strong at pattern recognition and prediction, may be able to help us discover new ideas from our existing ideas, and so speed up innovation just at the time when we may be needing it the most.

Yeah, that’s really interesting, because the ability of AI to synthesize different ideas and to innovate in such a powerful way, exclusive to it, begs the question whether, in the end, it might be the best tool to regulate the economy as a whole. And so that raises an interesting question.

Will a single, all-knowing singleton, an artificial superintelligence regulator, inevitably control all economic activity in the future? Yes, that’s a possibility. I mean, if we move to the singularity and we do have a super intelligence, I think ultimately it would be able to run and coordinate an economy on the scale of the world.

Look, why do we have different companies in the world today? It comes back to Ronald Coase’s arguments around transaction costs: instead of us having to conduct all the individual transactions on a market every day, companies and organizations save on transaction costs by internalizing a lot of market transactions in a more efficient way within the boundaries of the firm.

But the boundaries of the firm are limited by humans’ intellectual and computational abilities to run firms of a certain size. All firms, if they’re successfully run and their leaders have good management capabilities, grow in size. But ultimately, when firms become too big, they also start to suffer from bureaucratic problems; as we all know from dealing with huge bureaucratic firms with slow procedures, they are not so innovative anymore, because it’s fundamentally a computational problem. But if we don’t have computational problems, if AI is really very smart, it will be able to run and manage companies that are orders of magnitude larger than companies that can be run by humans, right? So the coordination problems, which are essentially information problems, may, at least in theory, in such a world, be handled by one big singleton, one big company which coordinates everything.

See, I find that kind of scary. And for me, it raises two concerns about the possible event where AI controls all economic activity. The first one relates to humans becoming redundant. There’s a theory in psychology called self-determination theory, which says that competence is one of the basic human needs.

And this involves feeling a sense of mastery over one’s environment and confidence in one’s own capabilities, and it’s usually achieved through professional work. Now, my concern is: if humans were idle in society because all economic activity and production were undertaken by a superintelligence, could they remain sane? And then my second concern is that recent literature in economics, for example from Raworth, suggests that economic growth should no longer be the desired goal, given the sustainability problems we face today. So if a superintelligence tends to maximize the output and the growth of an economy through super-efficient organization,

 would that even be the desirable objective in the first place?  Okay, so there’s two issues here.  One is human happiness in a potential age of abundance and infinite leisure.  And the second is the desirability of growth  and whether perhaps given planetary boundaries,  shouldn’t we be focusing rather on degrowth?  So these are two important questions  that are being asked today more and more.

So let’s deal with the first one, human happiness. Yes, certainly, what gives meaning to one’s life is not necessarily having infinite leisure. There is a mismatch between how humans evolved and being very quickly in a situation where we have, say, AI taking care of all our needs and we are in complete leisure.

These things may happen too quickly; technological evolution is much faster than biological evolution. And so you find this mismatch: our brains and our senses were developed for an environment in which life was nasty, brutish, and short. We faced huge challenges in our evolutionary past, and our brains were perhaps wired in such a way that we find ultimate meaning by tackling difficult things, by facing up to challenges.

And if we are in a situation where we have all our problems solved, that may in fact create a crisis of meaning. Many people argue that, in rich countries in the West, we are already facing a kind of crisis of meaning. So I think that is something really to take into account.

But of course, there’s also the speculation that if we are in that situation, then perhaps such an AI could create simulated worlds for us. In those simulated worlds, we could really face all the real challenges that one needs. And who knows, maybe, as some have suggested, we may ourselves just be living in a simulation.

 You know, somebody, some other civilization may just want  to simulate our environment to, you know, to perhaps experience what it is to live at this  particular point in time. So, yes, we certainly have to think about those type of issues.  The second issue regarding growth is that with an artificial intelligence, of course,  you will have much more efficient use of resources.

And I think our biggest challenge from a green growth perspective is to decouple GDP growth from carbon emissions. And this has begun: if we look at the Western, advanced countries, they have started seriously to decouple carbon emissions from GDP growth. But we need to do this faster, and we need to do it completely. And potentially, this is possible,

 And potentially, this is possible,  because our biggest, you know, economic growth is energy, and even though the energy and mass is equivalent, and we don’t live in a completely  isolated, closed system where we have, you know, limited resources. We are bombarded by the sun  every second with huge amounts of energy.

And we don’t harness even 1% of all the energy that reaches the earth, and that is not even all the energy that comes from the sun, right? So we need the technology to harness the energy that we already have. That would give us almost infinite amounts of energy to move into an AI type of world, which will probably be more digital. So if you read Robin Hanson’s fantastic book The Age of Em, he discusses the potential that, if you have a purely digital economy with digital agents and digital beings, digital people, you could have an economy that doubles in size every two weeks.

At the moment, the world economy doubles around every 35 years at a 2% rate of growth. So just imagine an economy doubling in size every two weeks. It would largely be decoupled from physical resources, because it could be an economy that’s largely digital and would perhaps need only the energy and the physical infrastructure investments.
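The doubling-time arithmetic behind that comparison is the standard continuous-growth rule T = ln(2)/g. The quick check below confirms the 35-year figure from the conversation and shows how extreme a two-week doubling would be:

```python
# Doubling time under continuous growth: T = ln(2) / g.
import math

g = 0.02  # ~2% annual world growth, as in the conversation
print(f"2% growth -> doubles every {math.log(2) / g:.1f} years")  # ~34.7

# Doubling every two weeks means 26 doublings a year, i.e. growth by a
# factor of 2**26 -- roughly 67 million-fold -- per year.
print(f"2-week doubling -> x{2 ** 26:,} per year")
```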

So it could be that in the future the economy is so different, just as different as the industrial age was from the agricultural age, and the agricultural age from the forager age, that this mode of growth will be so completely different that we could feasibly and sustainably grow much faster than we have ever grown before. So when people say, look, we should be growing by not more than 0.5% of GDP per year, maybe they are missing the fact that in the future we could grow by factors ten times that amount quite sustainably. So that’s the one thing about the technology. And I’m talking here really about somewhat science-fiction scenarios, for the sake of thought experimenting.

But there’s also another reason why I say in my paper that we need to think about optimum growth: if, for instance, we live in a universe where there are also extraterrestrial intelligences, these intelligences are likely to be AI, and they would probably pose an existential threat to our civilization on Earth.

So it is probably good for us to see if we can develop our own AI before we encounter a foreign AI, to be able to defend ourselves better, but also so that we have enough resources to expand into the galaxy for the future development of humans. But we are not there yet. We don’t currently have the money and the resources for galactic expansion, and we don’t currently have the resources and the technology for an artificial general intelligence.

So if we want to move in that direction, we still need to develop economically to the level where we can afford this expansion and this AGI, because this will ultimately be in the interest of safeguarding humanity over the long run. The risk of stopping growth now is that we inhibit ourselves and maybe fail to meet the ultimate existential threat, which may come from outside of the galaxy or outside of our solar system over the really long run.

This is, like I said, a little bit philosophical, thought-experimental, and a little bit science fiction, but it certainly begs the question: what is the optimal rate of growth? When should we stop growing? What are the limits of growth? Some people argue we are now at the limits of growth.

Other people say, no, we can still grow for 200, 300 years before we are at the limits of growth, looking at resources and the physics of economic growth. So I’m saying in my paper that we need more research and more thinking about these types of issues, and to link this up specifically to the potential role that artificial intelligence can play in overcoming boundaries and in addressing the existential threats that we may or may not face.

Wim, what you’ve been saying is really interesting, because it really highlights how there is a trade-off, there are negative and positive aspects, in letting the super intelligence run free or containing it. But one question that comes to my mind is: if we were to decide that it’s better to contain the artificial super intelligence, how could we possibly do that? Imagine the singleton has developed or is about to develop.

How could programmers, how could policymakers, ensure that they would not lose control of the situation? How would the containment come about? I don’t think that they can. I think the only way is to think of the typical pattern of trajectories of technology development over time, which tend to be incremental.

And this incremental process, one can monitor it and apply the brakes if things seem to go out of control. And I think, in terms of AGI, we are not there yet. But I also think ultimately, as some philosophers in the transhumanist and post-humanist areas have been speculating, it may very well be the case that it’s never a situation of artificial intelligence versus humanity; perhaps we will merge into the AGIs and the super intelligence that we eventually create. Now we’re already augmenting

our human bodies with artificial teeth, artificial pumps in our hearts, artificial arteries, artificial limbs. It will probably not be unrealistic to extend these patterns and expect that sometime in the future we will also be augmenting the human brain. And we may even, as Robin Hanson speculates in The Age of Em, at some stage in two or three hundred years have brain emulations, copying or uploading our brains and consciousness, although we don’t yet know what consciousness is, probably into a digital format.

So it may very well be the case that the next phase of the evolution of humans would be that we become part of our technology, as we already are, but more fully, and that we become part of the AGI. So it’s not that there is the AGI and there are humans; this will be us in the future. That’s one scenario, at least.

Oh, that transhumanist note at the end is, I think, a very good optimistic touch, or pessimistic, depending on your point of view, on which to end our podcast, because we are kind of running out of time. But we’ve talked about the asymmetries in power that can result from the rise of artificial intelligence; the economic problems that humans face in innovation and depopulation, and how AI can solve them; and then, lastly, whether it is desirable at all for artificial intelligence to overtake us, and how that transition would come about. It’s been a pleasure to talk about this with you today. Tell us, where can we find some more of your research, and where can we find out more about your work?

Yes, thank you. It was nice talking to you as well. It’s nice to speculate about these future scenarios because, as Ursula Le Guin said, the future is a sterile space where we can experiment and draw some lessons, and we can take from these thought experiments perhaps some pointers to where we should try to improve,

you know, our society and our reality as it is now. So I’ve been trying to do this, and I’ve been writing a couple of papers on the issues of technology and AI. Most of what we’ve talked about today, if people are interested in reading about it, they can find in a paper that was published at the end of last year.

It’s available on the website of the Institute of Labor Economics, known as IZA. So if you Google IZA, you will find the paper is freely downloadable. People can also go to my personal website, wimnaude.com, where I also have links to various papers that I’ve written on AI in the last two or three years. I can also recommend that listeners, if they’re interested in this, go to the OECD’s AI Policy Observatory, which includes what they call the AI Wonk blog. I’ve added a couple of blog pieces on AI there, and there are various interviews and links to work that I’ve done in the context of the OECD’s AI Policy Observatory.

And there are also other interesting resources there if you’re interested in AI policy and AI governance. I think we all are around here. The talk has been interesting, for lack of a better word; I’ve enjoyed it thoroughly. And personally, my favorite touch was how you hinted at transhumanism at the end and what that might mean.

Charlie, what was your favorite part? Well, you’re putting me on the spot here, Javier. I think the different approaches to alignment are what I found most interesting. And the transhumanism is also very interesting stuff. So yeah. Well, it was great having you, Wim. I hope you’ll come on with us again.

It has been really interesting, and I think we can safely say we have all learned a lot. And whether you see this problem as something to look forward to or to be concerned about, I think it is really interesting for everyone, and there’s no doubting that. Thank you.

 

“Useless People” – Who is Yuval Harari? Klaus Schwab’s Right-Hand Man – YouTube

 

Transcript:

So recently, Klaus Schwab announced that he’s going to be stepping down from the World Economic Forum in January of 2025, and there’s a leading candidate who may replace him, whose books have sold 45 million copies in 65 different languages. His name is Yuval Harari. He’s been called the brain of Klaus Schwab.

 His books have been recommended by Bill Gates, Obama, and many others.  This is a guy that’s known by many of the world leaders, prime ministers.  He wrote a book called Sapiens, sold over 21 million copies.  Homo Deo, sold over 9 million copies.  21 Lessons for the 21st Century, sold over 5 million copies.

He’s someone you ought to know, because what we learned in the last few years is the amount of power Klaus Schwab has with Davos. If this guy takes over, we have to find out how this gay man, born in Israel in 1976, who received a PhD in history from the University of Oxford in 2002, could potentially have that much power. We’re going to talk about him today.

So let’s go through his life. A very unique guy, definitely different from everybody else. He’s openly gay and married his husband, Itzik Yahav, in Toronto in 2010. They co-founded Sapienship, a social impact company. He has practiced Vipassana meditation since 2000, which, by the way, I may have butchered the way you pronounce it, but he meditates. He owns a smartphone only for emergencies and travel.

A few things about him: public speaking, he’s delivered keynote speeches at the World Economic Forum at Davos; he and his husband donated a million dollars during the pandemic. He’s criticized Israeli Prime Minister Benjamin Netanyahu, particularly opposing his judicial reform plans. He’s done a bunch of different things.

So he did an interview with The Guardian in a Q&A format. I want to go through a couple of the answers that I think are important to cover. What is your greatest fear? That we will destroy our humanity without even realizing what we’ve lost. Okay, interesting. Which living person do you most admire and why? On a personal level, a friend who is a single mom raising two kids by herself during this COVID era. She’s a real hero.

If a historical personality is what we're talking about, I choose Mikhail Gorbachev, who probably saved the world from World War III. What is your most treasured possession? My body. What would your superpower be? Being able to observe things as they really are. Okay? What did you want to be when you were growing up? Very interesting answer: loved.

 What is your guiltiest  pleasure? I don’t feel guilty about pleasure. When’s the last time you changed your mind about something significant?  This year with COVID.  I’m a big believer in the need for global cooperation on major problems,  and watching the world over the past year made me realize it’s going to be much,  much harder than I thought and maybe even impossible.

Tell us a secret. The people who run the world don't understand it. He's around the people who run the world. So he's saying the people he's around don't understand how the world runs. Interesting. In another interview, he's asked what comes after, meaning what comes after life.

 He says, I’m not sure I haven’t managed to go much beyond what was in Homo Deus, in which  Harari discussed the emerging religion of data is and data doesn’t reclaim that the  universe consists of data flows and value of any phenomenon or entity is determined by its contribution to data processing.  He’s talked about the rise of the useless class, us.

Harari discusses the impact of AI on the workforce, suggesting that AI and automation will render many jobs obsolete. He's not alone there; other people believe it as well. He introduces the concept of a useless class, referring to a large segment of the population that could become unemployed due to technological advancement.

He emphasized the need to develop new economic, social, and educational systems to support those affected by these changes as traditional job structures are transformed. We get to learn a little bit about his father and their relationship, which is a little bit gray. We don't know much about it.

From the age of eight, Harari attended a school for bright students, two bus rides away from his family's house in Kiryat Atta. Yuval's father, who died in 2010, was born on a kibbutz and maintained a lifelong skepticism about socialism. His work as a state-employed armaments engineer was classified.

By the standards of the town, the Harari household was bourgeois and bookish. They were rich; they were known as having money and being successful. He built, out of wood blocks and Formica tiles, a huge map of Europe on which he played war games of his own invention. Harari told the individual who interviewed him that during his adolescence, against the backdrop of the First Intifada, he went through a period when he was a kind of stereotypical right-wing nationalist.

He recalled his mindset: Israel as a nation is the most important thing in the world, and obviously we are right about everything, and the whole world doesn't understand us and hates us, so we have to be strong and defend ourselves. He laughed: you know, the usual stuff. This next part is a little bit interesting, so brace for impact. One reason he chose to study outside of Israel was to start a new life as a gay man.

On weekends he went to London nightclubs. I think I tried ecstasy a few times, he says, and he made dates online. He set himself the target of having sex with at least one new partner a week. Very aggressive, competitive. To make up for lost time and also to understand how it works, because I was very shy, he laughed.

Very strong discipline. He treated each encounter as a credit in a ledger. So if one week I had two, and the next week there were zero, I'm okay. Like an accountant, a CPA. Very interesting.

 He says here,  within the next century or two, we humans are likely to upgrade ourselves into gods. You could be a god and change the most basic principles of the evolution of life. He says, I titled the book  Homo Deus because we really are becoming gods. In the most literal sense possible, we are acquiring  abilities that have always been thought to be divine abilities, in particular, the ability to create life, and we can do with that whatever we want.

The main products of the future economy will not be food, textiles, and vehicles; rather, the big products of the 21st century are going to be bodies and minds. So where's he going with this? Here's what it says: Harari's vision for the future is dictatorship. One of the dangers of the 21st century is that machine learning and artificial intelligence will make centralized systems much more efficient than distributed systems, and dictatorships might become more efficient than democracies.

The revolution in information technology will make dictatorships more efficient than democracies. Do you want a dictatorship? He's saying it may be more efficient. If you don't want that, know that this guy may become the leader of the World Economic Forum.

 This is what he’s going to be talking about  when he’s selling books.  They sell.  When he writes books,  tens of millions of copies get sold.  You need to kind of know what these guys have in mind  for vision for you.  I mean, listen, if you’re inspired already,  stick around.  You’re going to be motivated even more.

 Here’s one for you. I strongly believe that given the technologies we are now developing  within a century or two, at most our species, you will disappear. You’re going to disappear.  I don’t think that in the end of the 22nd century, the earth will still be dominated by homo sapiens.  Here’s why he thinks there’s a big challenge with the useless class based on how he sees it. Watch this. The useless class possesses a threat to community peace primarily because they will lack both a means of income.

How are you going to make money? Everything is being done better than you by AI. ...and any sense of purpose. If you're not making money, if you're not doing a job, what is the purpose of life? Why do I exist? What can I do? All the other machines are doing a better job than me. That's kind of what he's talking about.

The emergence of two distinct classes, those employed in AI-driven roles and the unemployed, the useless class, will exacerbate social inequalities and could lead to discontent and unrest. Even if governments provide financial support to the unemployed, the psychological effects of lacking purpose need consideration, such as how to address their potential boredom and dissatisfaction. Harari highlights a fundamental educational challenge.

The uncertainty of future job markets makes it difficult to determine what skills should be taught in schools today. It's actually a good point. What should we be teaching in schools today? While the AI revolution might create new jobs, these are likely to be high-skilled positions, leaving those previously in lower-skilled jobs unable to transition easily.

Drugs and computer games. Are you ready for this? The biggest question: what to do with all these useless people? Okay. Now we see the creation of a new massive class of useless people. The problem is boredom and what to do with them when they are worthless. My best guess is a combination of drugs and computer games.

Really? Drugs and computer games? And the whole UBI thing. Remember when Andrew Yang was talking about UBI? Here's what he says about UBI. UBI involves taxing large corporations and the wealthy to fund regular, unconditional payments to all citizens, aiming to provide a safety net for everyone. So let me tell you how I process this.
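To make the funding mechanics concrete, here is a minimal back-of-the-envelope sketch of the UBI arithmetic described above: a flat levy on a corporate/wealth tax base paying for an unconditional payment to every citizen. All the figures (population, payment size, tax base) are hypothetical placeholders for illustration, not numbers from the video or from Harari.

```python
# Back-of-the-envelope UBI arithmetic. All figures below are
# hypothetical placeholders, not real policy numbers.

ADULT_CITIZENS = 250_000_000   # hypothetical adult population
MONTHLY_PAYMENT = 1_000        # hypothetical payment per person, per month

# Cost side: pay everyone, every month, unconditionally.
annual_cost = ADULT_CITIZENS * MONTHLY_PAYMENT * 12
print(f"Annual program cost: ${annual_cost / 1e12:.1f} trillion")

# Funding side: a flat levy on a (hypothetical) corporate/wealth tax base.
TAX_BASE = 15_000_000_000_000  # hypothetical taxable base, in dollars

required_rate = annual_cost / TAX_BASE
print(f"Required flat levy on that base: {required_rate:.1%}")
```

With these placeholder numbers the program costs $3.0 trillion a year and would require a 20% levy on the assumed base; the only real takeaway is that the cost scales linearly with both the payment size and the number of recipients.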

I process it a couple of different ways. One, I'm a man of faith. I think the future looks bright, period. I'm optimistic. I'm team human. I think you and I are wired in a way to consistently look for new solutions and ways to rewire ourselves. And he's not somebody who says recreate yourself. Recreate yourself, my ass; that's his line. He doesn't believe you can recreate yourself.

If your job as a truck driver is replaced by a machine that can do your job, etc., etc., how are you going to go recreate yourself, is what he's saying. Now, the point is right if you're not proactive way before that change comes. So just like jobs: if right now your kid's about to go to college, would you say, son, you should go get a degree in being a great typist?

Go be a great typist, a person who can type 80 words per minute. What value does that have today? You may say, well, I want you to go start a newspaper company. A newspaper company? You wouldn't ask your kid to do that today, but maybe 80 years ago, different story. A hundred years ago, working for the radio? Are you kidding me, today? What are you going to do working for radio? You want to go work in radio today? It's not the same as it was before, right? So you need to adjust your career planning for yourself, your company,

your kids, your family. These are the types of conversations you need to be having. But just so you know, if you are going to live your day-to-day life and not pay attention to who is coming into the power to make decisions that may dictate the way you live, then you deserve what they end up doing to you.

 It is our job to be proactive and study these folks  and see what they’re imposing,  because many tens of millions of people  listen to a guy like this and say,  this guy went to Oxford.  He must know what he’s talking about.  You know what?  Dictatorship is better than democracy.  I think that’s what we should do.

Yeah, you know what? We should tax all these big companies. Forget these rich people. They're useless idiots anyway. They're worthless. Yeah, I'm with this guy. If you don't have a counterargument for this, and he's got that influence, he's eventually going to have influence over you, your kids, your family, and your community. So you've got to be aware of what's going on, because only the paranoid survive.

If you got value out of this video, give it a thumbs up and subscribe to the channel. And if you've not seen the video on his boss, Klaus Schwab, where we go a little bit deeper into what the World Economic Forum is, you may want to watch it. If you've not seen it, click here to watch it. Take care, everybody. Bye-bye, bye-bye.