The easiest way I have found to set up TensorFlow on your system is the Docker install. Its advantage is that it just works out of the box: all dependencies are included and you can start building deep models within minutes! One warning though: it does not let you accelerate learning on a GPU.
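As a quick sketch of that workflow, assuming Docker itself is already installed and using the official `tensorflow/tensorflow` image from Docker Hub (exact tags and entry points vary between releases):

```shell
# Fetch the official CPU-only TensorFlow image (matching the GPU caveat above)
# and sanity-check the install by printing the TensorFlow version.
# Guarded so the snippet degrades gracefully if Docker is missing.
if command -v docker >/dev/null 2>&1; then
    docker pull tensorflow/tensorflow
    docker run --rm tensorflow/tensorflow \
        python -c "import tensorflow as tf; print(tf.__version__)"
else
    echo "Docker not found - install Docker first"
fi
```

For interactive work, `docker run -it -p 8888:8888 tensorflow/tensorflow` starts the Jupyter notebook server bundled with recent images on port 8888.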
AI is not a new concept. It was born in the summer of 1956, when a group of pioneers came together with a dream to build machines as intelligent as humans. AI encompasses disciplines such as machine learning, which can find patterns in data and learn to predict phenomena, as well as computer vision, speech processing and robotics.
The main technique behind the current hype around deep learning is artificial neural networks. Inspired by models of the brain, these mathematical systems work by mapping inputs to a set of outputs based on features of the thing being examined. In computer vision, for example, a feature is a pattern of pixels that provides information about an object.
Most commonly, the supervised learning approach requires the computer to “learn” these associations by training on big data sets labelled by humans. What began with classifying cat videos has now extended to applications such as driving autonomous vehicles.
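To make the idea of "learning associations from human-labelled examples" concrete, here is a minimal sketch. The tiny data set and the perceptron learner are my own illustration, not from the article: a single artificial neuron nudges its weights towards the labels a human has provided, which is supervised learning in its simplest form.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from labelled examples: list of (features, label), label 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Predict with the current weights, then nudge them towards the label
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# The "human labelling" step: each input comes with its correct answer
# (here the logical OR function, as a toy stand-in for cat videos)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Real deep networks stack millions of such units and train on far larger labelled data sets, but the principle of adjusting weights to reduce prediction error is the same.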
Step 2: Identify where AI thrives
With this knowledge, we can start to understand where AI is optimally positioned to take over. Have a look around you and take note of tasks that require huge amounts of data processing.
For example, no human would or could look through everyone’s click patterns on Google to figure out what someone wants.
Even the more advanced capabilities AI has demonstrated, such as winning at Go (AlphaGo), video games and, most recently, poker, rely on training over thousands and thousands of trials.
Essentially, AI is particularly good at any task that requires an enormous amount of repetitive processing. If this sounds like your job, it might be time to start thinking of a survival plan.
To evaluate your “automation risk”, type in your job on this site to find out what researchers have calculated for your field. Even if you’re not worried, have a look. The prepared person stays ahead.
Step 3: Devise an action plan
You now have two choices:
Option A: Resistance
Your first option is to fight back. This may be your natural reaction and, as during the industrial revolution, you would not be alone in wanting to oppose the change.
The nature of the human race is that we will always strive towards the next advancement. Resisting change out of fear of its disadvantages may work in the short term but will only make you more likely to be left behind in the future.
Option B: Make friends with AI
The far superior strategy is to form a treaty. Accept that AI will increasingly become a part of society and look for possibilities to collaborate. There is a huge potential for AI to assist in places where humans fall short, precisely because of the processing power.
As with every big change, there are fears about new technology like AI. Ultimately, the way to survive the AI revolution is to embrace the partnership. Understand the potential that AI has to improve the world around you and look for those opportunities to implement positive change.
If you prepare yourself, you may find the AI revolution allows you not only to survive but to be an even better version of your human self.
About The Conversation:
The Conversation is an independent, not-for-profit media outlet that uses content sourced from the academic and research community.
I had the joy of reading Malachy Eaton’s “Evolutionary Humanoid Robotics” (Springer, 2015, ISBN 978-3-662-44598-3). It sheds light on the intersection of evolutionary search and robotics, with a special focus on humanoid, or human-like, robots. It is a skill to hit the right spot between introducing newcomers to a concept and informing researchers already in the field. Eaton manages to do just that, delivering a nicely flowing, quick read at 151 pages.
This review is also published in the Springer “Genetic Programming and Evolvable Machines” Journal.
The reader is given an introduction to the relevant principles and ideas in evolutionary computing and its applications in robotics. I can recommend this book to starting graduate students in computer science and/or robotics, and to researchers looking to get started with evolutionary robotics. With its focus on humanoid robots it might seem like a niche book, but the discussion of current evolutionary approaches and their limitations has wider implications.
Eaton asks questions that are interesting for the whole area of robotics, not just humanoid robots: How can we close the reality gap, i.e. reduce the difference between simulated robots and real robots? How can we create (program, learn or evolve) interesting behaviours for complex robots? And, a question also very important in my own research, how can we perform evolution and/or learning on real robots safely? As a researcher in evolutionary humanoid robotics (EHR) I particularly liked these questions and the discussion of how EHR can make a difference. The book also covers the different approaches to evolving robotic “brains”, i.e. a robot’s decision-making or control system, in contrast to evolving the robot’s body, and explains how the evolutionary process differs when operating in simulation versus on real hardware. For people starting in the field, Chapters 5 and 6 provide a really nice overview of the foundations of research in evolutionary humanoid robotics from 1990-2000 (what Eaton refers to as “prehistory”) and the state of the art from 2000-2014. I particularly liked the comprehensive tables detailing the published research over the last decade in both simulated and real-world environments.
The book’s focus is on humanoid robotics, and Eaton does a good job of defining the terminology, such as describing the various gradations between evolutionary robotics and evolutionary humanoid robotics, leading the reader to a more fundamental understanding of the levels of humanoid robots. He also discusses ethics in robotics research several times throughout the book and dedicates Chapter 8 to philosophical and moral considerations. I think Chapter 8 is relevant to a much broader audience than just colleagues in the EHR field.
My only criticism is that Eaton missed an opportunity by not provoking a discussion of what the big challenges in EHR currently are and what they will be in the future. I would have liked to see him try to answer the question: what is the next challenge after evolving bipedal walking? Important issues such as the benchmarking of algorithms (beyond the current approach of simply creating robot competitions), the definition of interesting behaviours and the long-term deployment of autonomous robots are only hinted at, with no opinion offered on how to tackle them. To be fair, these are fundamental questions for all of robotics, not just evolutionary humanoid robotics.
To conclude, “Evolutionary Humanoid Robotics” is a well-written, thoroughly researched introduction to what might seem like a very specialist topic, yet it shows the field’s wider implications. It surveys the research outcomes of the last two decades and hints at how these might continue in the next few years. On top of that, it is a wonderful guide for finding relevant references in the field. I highly recommend it to young researchers starting out in EHR, as well as anyone interested in the general idea of evolving robotic systems, humanoid or not.
QUT was host to Robotronica again – and it was amazing!
What a robotics fest it was! The best part was seeing the excitement in the kids’ eyes when they interacted with our robots. It is always great to talk to the public about your research and to be able to inspire kids!
If you missed it, here are videos, photos and media coverage of our Naos, Baxter and mobile platforms:
Robots, machines and a real-life cyborg have given curious humans a glimpse of what the world will look like a decade from now.
Thousands flocked to Brisbane’s Queensland University of Technology (QUT) Gardens Point campus for a day of fun, education and discovery at Robotronica.
Adults and children flocked to an event celebrating advances in robotics
Robotronica was held at the Queensland University of Technology
The first government-recognised cyborg was a guest at the exciting event
The world’s fastest drummer, who has a robotic arm, also played for guests
(I had the chance to meet Neil Harbisson for a photoshoot with Baxter, awesome guy!)
The robots have arrived, the future is here. The Queensland University of Technology is hosting its Robotronica spectacular today, a one-day free event offering Brisbanites the chance to connect with the latest innovations in robotic technology.
The robot-infested QUT campus is showcasing workshops, live demonstrations, performances and installations, as well as a robot petting zoo.
I was showing off our Kinova robotic arm to heaps of kids there, and it was great!
In short, the future is bright!
Today is the anniversary of the first time humans set foot on the Moon. Another year has passed since Apollo 11 landed on the lunar surface in July 1969. What a feat of engineering it was to follow JFK’s bold call to a generation of engineers to build the systems that allowed humans to walk on the Moon. Yet this anniversary also means that yet another year has passed without a human presence on the Moon. The last men left our celestial companion back in December 1972, and with them our sporadic crewed lunar exploration ended.
Given some recent rumours, though, this will hopefully change in the next decade. There has been increased interest in building lunar bases among the major (and not so major) spacefaring nations. Continue reading →
The CVPR conference in Boston, one of the premier computer vision conferences, was all about convolutional neural networks and deep learning. These new (or not so new) techniques seem to be doing everything from image classification to scene understanding. Although the vision community has not shown much interest in robotic applications so far, I had a feeling that this is starting to change (slowly, at least).
tl;dr: CVPR is huge, with lots of convolutional neural networks, which are now the de facto standard for tackling computer vision problems. CV research is becoming easier to reproduce thanks to open-source code AND models. There is a trend towards investigating what is going on inside these networks, and also towards more robotic (real-world) applications of vision. My longer write-up of #CVPR2015 is after the break. Others have done similar things: a great write-up by Tomasz Malisiewicz, and another by Zoya Bylinskii listing interesting CVPR 2015 papers.