Dear friends of CSER,

It’s been a busy six months since our last newsletter, and momentum is building. As our activity level increases, these newsletters will move to a quarterly schedule. For more regular news updates and more about our research, please see our redesigned website: http://cser.org. Thank you for your support!

HP, MJR, JT & SOH

Upcoming Event: “Existential Risk: Surviving the 21st Century”

On February 26th, Huw Price, Martin Rees and Jaan Tallinn will speak at Cambridge’s Lady Mitchell Hall about the challenge of technological risk. More

Advisory Board

We are delighted to announce the addition of two world-leading academics to our advisory board: ethicist Peter Singer and artificial intelligence expert Stuart Russell. More

CSER and policy

CSER’s founders have been active on the policy front over the past six months. Highlights include:

  • Martin Rees’s involvement in the World Economic Forum’s risk sessions in Davos, drawing attention to catastrophic risks.
  • Huw Price’s workshop on existential risk, policy and the public in Arizona in February. More.
  • Fruitful meetings with key policymakers from a number of governments; plans for high-level workshops on technological risk and policy will be announced later in the year.

Public engagements

CSER’s founders have given and written a number of high-profile talks and articles on existential risks and emerging technological challenges. Recent highlights:

  • Martin Rees was Star Speaker at the British Science Festival in September, giving a talk on “Science, Environment and the Future”. More.
     
  • In October, Martin Rees was interviewed by the Financial Times:
    “It seems to me that our political masters should worry far more about events that could arise as unexpectedly as the 2008 financial crisis but which could cause worldwide disruption.”
    In January, he spoke to the New Statesman about the challenges of the coming century:
    “Advances in technology will render us vulnerable in new ways.”
     
  • In October, Jaan Tallinn spoke to the Harvard High Impact Philanthropy Society about the importance of working to reduce existential risk.
     
  • CSER’s Academic Project Manager Seán Ó hÉigeartaigh will speak about various challenges associated with existential risk at TEDx Hasselt, the Oxford Transhumanist and Emerging Technologies Society, the Dutch Future Society and Dublin’s Pint of Science Festival.

Funding

Although our highly ranked “New Science of Existential Risk” ERC grant application was not selected in the final round, we have several promising new developments to report, including additional sponsors. More

Essay Competition

FQXi, directed by CSER advisor Max Tegmark, is running an essay prize competition titled “How Should Humanity Steer the Future?”. Part-sponsored by Jaan Tallinn, the competition has a generous $40,000 prize fund for the top 18 entries. More

Preliminary plans for a workshop with MIRI

In February, Huw Price visited the Machine Intelligence Research Institute for research discussions on topics including artificial intelligence risk. Preliminary plans are being developed for a joint workshop on decision theory later in the year.

CSER and existential risk in the news

The Centre has been discussed widely in the media; for one example, see “Cambridge research project will assess threats to human existence” in the Financial Times:
“The aim is to compile a more complete register of ‘existential’ risks and assess how to enhance resilience against the more credible ones.”

Concerns about risks from artificial intelligence are also being discussed more rigorously in the press.
  • In the Huffington Post:
    “A handful of DeepMind funders and founders -- including co-founders Legg and Demis Hassabis, and backers Jaan Tallinn and Peter Thiel -- have consistently worked to raise awareness about the potential risks of uncontrolled AI development.” More
     
  • In the Washington Post: 
    “I’m talking about the risks posed by ‘runaway’ artificial intelligence (AI). What happens when we share the planet with self-aware, self-improving machines that evolve beyond our ability to control or understand?” More
     
  • Seán Ó hÉigeartaigh was recently interviewed by RealClearTechnology in a lengthy piece on AI risk: 
    “Designing the goals and rules of [] algorithms, such that unforeseen catastrophic consequences cannot occur, turns out to be extremely difficult.”