Tech giant Samsung came under fire this week when it was discovered that its Smart TVs could be listening in on private conversations and sharing the information with third parties. Brent speaks with Bart Selman, AI researcher and professor of computer science at Cornell University, about the ethical issues that arise as technology gets smarter.
How "smart" is this smart television that is listening to my conversations?
I think at this point it's quite limited still. It's basically listening for key words. But speech recognition technology has advanced a lot in the last few years, actually. The machine can understand a lot more, and I think what Samsung will do is they will add more and more capabilities to the understanding.
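Keyword spotting of the kind he describes can be pictured as a simple filter over transcribed speech. Here is a minimal sketch, assuming the audio has already been converted to text; the command phrases and sample utterances are invented for illustration and are not Samsung's actual implementation:

```python
# Toy keyword spotter over already-transcribed speech.
# WAKE_WORDS and the sample utterances are invented; this illustrates
# the idea only, not Samsung's implementation.

WAKE_WORDS = {"hi tv", "channel up", "channel down", "volume up"}

def spot_keywords(utterances):
    """Yield (phrase, utterance) for each utterance containing a command."""
    for utterance in utterances:
        text = utterance.lower().strip()
        for phrase in WAKE_WORDS:
            if phrase in text:
                yield phrase, utterance
                break

# Everything else the microphone heard is simply not matched here;
# the privacy question is what happens to those unmatched utterances.
heard = ["what should we have for dinner", "hi TV", "volume up please"]
for phrase, utterance in spot_keywords(heard):
    print(f"command {phrase!r} detected in {utterance!r}")
```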
So how many years before smart TVs get really smart?
I think about five to ten years. In five years, you'll be able to ask, 'I'd like a show of the type Breaking Bad, but I'd like it to have a more political angle,' and then you find something. There's always the question in artificial intelligence, 'To what extent does the machine really understand you?' And right now the understanding is not deep, but that will rapidly improve. Machines will start to learn about our world. That's what's really happening with all the data they're collecting — they are learning about the human world and how we perceive the world, and how we interact with the world. And that's going to make them much more like us and much more able to interact with us intelligently.
A man watches a presentation of Samsung's voice-recognition SmartTV technology. (Thomas Peter/Reuters)
But would the TV take the information it wasn't intended to overhear, and then offer me information in return?
Oh, definitely. This is actually a very interesting issue. Manufacturers and people who build these devices will be very careful about that, because they don't want to freak people out, basically. It's amazing what Google already knows about you, from your Gmail and from your searches. It's one of the things that companies balance. It could actually make recommendations for you. There's the famous incident, I think it was at Target, where the machine learning algorithm had discovered that somebody was pregnant even before her family knew it. They're going to have to balance that. But these things are definitely possible, and the big change in intelligent devices is this ability to hear and see. They start getting the capabilities a human has for interacting with us. And that contributes to more data that is not owned by us, but is owned and sampled by large corporations.
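The Target anecdote is, at bottom, ordinary supervised learning: train a classifier on purchase histories labelled with an attribute, and it will guess that attribute for new customers who never disclosed it. A minimal sketch with entirely synthetic data (the products, labels, and baskets are all invented):

```python
# Toy illustration of the Target-style inference: a classifier trained
# on purchase patterns predicts a sensitive attribute nobody disclosed.
# All data here is synthetic and invented for illustration.
from sklearn.naive_bayes import BernoulliNB

# Columns: [unscented_lotion, prenatal_vitamins, large_handbag, cotton_balls]
purchases = [
    [1, 1, 1, 1],
    [1, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
pregnant = [1, 1, 0, 0, 1, 0]  # label the retailer inferred, never collected

model = BernoulliNB().fit(purchases, pregnant)

# A new shopper's basket is enough to produce a confident guess.
new_basket = [[1, 1, 1, 0]]
print(model.predict_proba(new_basket))
```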
How has artificial intelligence research been affected by companies like Samsung and Google and others who are collecting information and investing in the field and trying to make technology more intelligent as they go?
The change has been enormous in the last two or three years, and it's still ongoing. I characterize it almost like an arms race between the companies. The key feature is big data — big data and certain techniques developed in AI. You look at Facebook, Google, Microsoft, IBM, Samsung — they feel that the first [company to make a machine] who can truly understand natural language, who can truly understand what people want — that company will be the big winner, because their devices will be able to do better searches, better assistance. If you have an assistant that works really well and knows you really well, that will be a big advantage.
So when students are entering the field now, what kind of ethical conflicts are they facing that AI researchers didn't worry about 10 years ago?
Ten years ago, it was largely an academic enterprise. We had limited data sets. We definitely didn't have private data of people. Now, when students graduate and go to companies like Google and Facebook, they suddenly have vast amounts of data, and fairly private data. So that's one clear issue that we're going to have to deal with. On another level, we're building more intelligent systems. These systems are going to make decisions for us, even medical diagnosis systems. There you again have decisions that balance the interests of the patient against the costs, against the bigger issue of the safety of the population. The more AI techniques get involved in our lives, and the stronger their capabilities are, the more we start touching on these sorts of ethical issues. It's still very much in flux. There are very few guidelines now.
The life size humanoid robot 'RoboThespian' is 'a fully programmable interactive humanoid robot designed to inspire, communicate, interact and entertain.' (John Macdougall/AFP/Getty)
Do you have any examples of applications of AI, or of the ethical concerns of working with that data, that could have troubling consequences?
A lot of decisions, for example financial decisions, are being automated more and more. Loan applications, credit applications. It's only recently that we're seeing the legal profession getting somewhat interested in it because they realize that in some sense these machines are sort of a black box. They make decisions, but you don't quite know based on what.
The U.S. and, I imagine, many other countries have laws against discrimination: you're not allowed to take certain factors into account. But with a machine learning approach, the machine may actually make decisions using that kind of data, inferred rather than explicit. From your postal code, from your background, it can infer something about you. Things can seep into the system when they're not supposed to. The legal profession is slowly becoming aware: 'Wait a minute, if we transfer decision-making abilities to machines, which we're already starting to do, how can we ensure that decisions are made fairly and according to legal standards?' That's one example that I think is part of a current discussion.

In the financial world you already see systems that make decisions on a timescale much faster than any human. The earliest example, a warning almost, of what can go wrong came a few years ago, when we had a big flash crash on Wall Street. The Dow Jones index dropped about 1,000 points and came back up within about 20 minutes, but nobody fully understood what happened. You can't say, 'Okay, the humans should just watch over the system,' because the decisions are made on time scales of milliseconds to microseconds. But at least it has given people a warning that something could go wrong. One thing AI researchers are considering is, 'Well, we may have to design software that watches over other systems.'
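That last idea, software watching over software, can be sketched in a few lines: a monitor that tracks a trading system's prices and calls a halt when a new price strays too far from recent history. The window size, sigma threshold, and price feed below are invented for illustration; real market circuit breakers are far more involved:

```python
import statistics

# Toy "watcher" for an automated trading system: halt when a new price
# deviates too far from recent history. Window, threshold, and the price
# feed are invented; real circuit breakers are far more sophisticated.

class Watchdog:
    def __init__(self, window=3, max_sigma=4.0):
        self.history = []
        self.window = window
        self.max_sigma = max_sigma

    def check(self, price):
        """Return False (halt) if price deviates sharply from recent history."""
        recent = self.history[-self.window:]
        self.history.append(price)
        if len(recent) < 2:
            return True  # not enough history to judge yet
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # guard against flat prices
        return abs(price - mean) <= self.max_sigma * stdev

watch = Watchdog()
for tick in [100.1, 100.2, 99.9, 100.0, 60.0]:  # last tick mimics a flash crash
    if not watch.check(tick):
        print(f"halt trading: price {tick} is far outside the recent range")
```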
Then who will watch the program watching the program?
Well, indeed. That's going to be tricky.
How confident are you that the next generation of artificial intelligence for consumer use will do what we want it to do?
When it comes to self-driving cars or household robots — in the physical domain, I actually think things will be quite safe. I'm more worried about the consequences of companies having all your data. What AI will bring is the fact that you can understand the data. It's one thing to collect hundreds of millions of hours of video; that's not so much a problem when you can't really search it. It does become a problem when the data becomes searchable and understandable by machines. Then I think the risk is much bigger.
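A minimal sketch of the difference he is pointing at, assuming the recordings have already been transcribed: once audio is text, an inverted index turns an unsearchable archive into a lookup table. The file names and transcripts below are invented placeholders:

```python
from collections import defaultdict

# Toy inverted index over transcribed recordings: the step that turns
# a pile of audio into something searchable. File names and transcripts
# are invented placeholders.

transcripts = {
    "living_room_2015-02-09.wav": "we should refinance the mortgage this spring",
    "kitchen_2015-02-10.wav": "turn the volume up please",
}

index = defaultdict(set)
for recording, text in transcripts.items():
    for word in text.lower().split():
        index[word].add(recording)

# Years of recordings collapse into a dictionary lookup.
print(index["mortgage"])  # -> {'living_room_2015-02-09.wav'}
```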