At 7:30 AM, as I emerged from the Tube at King’s Cross station, I spotted the bright purple signs pointing to CogX. It’s the biggest AI conference in the UK, perhaps in the world, spread across a gargantuan site just outside the railway station.
As well as being London Tech Week, this has been a good week for the umbrella business: the rain was relentless. As a result the atmosphere on Monday morning was one of mild disgruntlement, but the show went on.
London mayor Sadiq Khan gave the opening address. He celebrated innovation in London but warned about the importance of an ethical approach when deploying AI technology, and he called on the next Prime Minister to “give artificial intelligence the attention it deserves”.
Khan’s advice to anybody thinking about starting a technology business is “think of London”. Personally I think Manchester’s just as good, but let’s not quibble. Referring to the Digital Talent Programme, Khan stressed the importance of diversity in technology hiring.
It wouldn’t be a tech conference without an expo. There were 26 companies exhibiting this year, including:
- Peltarion, a platform that aims to make machine learning easy. They also wrote a short book, The Essential AI Handbook for Leaders.
- Cloud Factory, who provide a cloud workforce for data labelling (I asked if they had anything to do with the rain, but it seems those are a different sort of cloud).
- Kortical, whose platform helps data scientists build better machine learning models. They’ve worked with the NHS, who were also exhibiting.
- NHSX, which works within the National Health Service to drive forward the digital transformation of healthcare.
- The Office for Artificial Intelligence, a UK government organisation whose mission is to drive responsible and innovative uptake of AI technologies for the benefit of everyone in the UK.
Building the future of AI hardware
There are 10 main stages at CogX, each with its own programme of talks on topics ranging from ethics to healthcare, the future of work, and the latest AI research. I spent Monday afternoon at the Cutting Edge stage for talks on the Future of AI Hardware.
I love learning about hardware. I don’t know this world nearly as well as software, so this series of talks was absolutely fascinating. Usually programmers don’t worry about hardware, but the world of AI is one of the few exceptions. Whether it’s wearable technology, smart cameras, self-driving vehicles or smart cities, the combination of cutting-edge software and hardware is what makes the difference.
James Wang from ARK Invest spoke about how novel chip technology is key to advancing AI. Nigel Toon spoke about Graphcore’s Intelligence Processing Unit. These are special-purpose processors designed for executing machine learning models. Finally Mike Henry from Mythic described another specialised processor, but this time it’s an analogue processor. Why analogue? Mythic’s design allows the parameters of a neural network to be stored in a denser encoding than would be possible for a digital processor, so it can support big models in a small amount of space.
It got more and more exotic from there, covering photonic processors and quantum computing.
Mercifully the rain had stopped on Tuesday. It was even briefly sunny! Having exhausted the expo I spent some time talking to the many startups next door in the Startup Village, before heading to the Alan Turing stage to hear Stuart Russell’s keynote.
Stuart Russell is a professor of Computer Science at UC Berkeley and co-author of one of the best-loved textbooks on AI: Artificial Intelligence: A Modern Approach.
Russell began by asking *what if we succeed?* i.e. what if we make super-intelligent machines? An analogous question: what if aliens sent humanity an email saying they’ll be here in 30 years? How would we prepare for that?
He discussed how probabilistic approaches to decision-making can overcome a lot of classical control problems in AI. These topics are covered in detail in Russell’s book Human Compatible: AI and the Problem of Control.
Every seat was filled as Christine Payne demonstrated incredible AI-generated musical compositions (you can listen to an example of MuseNet improvising Chopin) and Julian Nolan discussed how AI can help come up with new inventions.
AI and the Global Goals
The final talk of the day was a panel discussion on the role of AI in the United Nations global sustainable development goals. These 17 goals were set in 2015 for 2030.
The 2030 global goals
The panel was made up of:
- Richard Curtis, screenwriter and director
- Jacquelline Fuller, VP at Google and president of Google.org
- Claire Melamed, executive director of Data4SDGs
- Kamal Ahmed, editorial director at the BBC
- Kriti Sharma, founder of AI for Good. Kriti is a friend of ours and we strongly urge everybody to take a look at her organisation and the work she’s doing to promote ethical applications for AI.
One of the key takeaways for me was that while these goals are very ambitious, individuals can start small and local; solutions that work in your own community are likely to have wider applications. Gathering high-quality data is essential, so much so that Claire Melamed has built an organisation around just that, Data for Sustainable Development Goals.
The final day of the conference was actually just half a day; three full days of talks would be too much to take in.
AI opens up entirely new attack vectors. Imagine an AI that can pretend to be you by copying your writing style or even faking your voice in order to perform social engineering attacks. This and many other terrifying ideas were the topic of the State of Cyber Threats session with:
- Robert Hercock, chief research scientist at BT
- Dave Palmer, director of technology at Darktrace
- Grace Cassy, co-founder of CyLon
This was a fantastic keynote from Professor David Lane of the Edinburgh Centre for Robotics. There’s a short video on the work that the centre does here.
Links to talks
Some of the talks mentioned in this article are available on YouTube (these are still being uploaded at the time of writing).