Jen introduced herself via a social networking website by asking if I had any advice about getting into journalism. Boy, did I. She was pretty, about the same age as me and lived in my home town in Canada.
We messaged back and forth. Soon, she asked me if I'd like to catch a baseball game with her. Wow. An attractive girl with the same interests and career aspirations - how lucky could a guy be?
Still, it was the internet, so I asked Jen for more details about herself. She sent me a link. I clicked and was taken to a page that asked me to input my personal information, including credit card details. The game was up.
Jen was a chatbot, programmed to scour social network profiles for personal information then initiate conversations with the intention of suckering people into divulging their financial details. By poking around online, I discovered she went by many different names, but always used the same conversation strings, filling in the blanks with details such as her marks' professions. The bot had fooled dozens of men, as far as I could tell. I'm sure a handful had entered their credit card numbers, which doubtless led to them getting fleeced. By a machine, no less.
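The trick described above is essentially string templating: a stock of canned conversation openers with blanks that get filled from scraped profile data. As a rough illustration only (the templates, field names and profile here are invented, not taken from any real bot), the mechanism might look like this:

```python
# Hypothetical sketch of a template-driven scam bot's opening lines.
# Canned strings carry blanks; scraped profile details fill them in.

TEMPLATES = [
    "Hi! I saw you work in {profession} - any advice about getting into {profession}?",
    "A fellow {hobby} fan! Want to catch a game with me sometime?",
]

def personalise(template: str, profile: dict) -> str:
    """Fill the blanks in a canned string with a mark's scraped details."""
    return template.format(**profile)

# Invented example profile, as a bot might scrape it from a social network.
profile = {"profession": "journalism", "hobby": "baseball"}
openers = [personalise(t, profile) for t in TEMPLATES]
```

Because the conversation strings never change, only the filled-in details do, the same bot is easy to spot once you compare its chats across different marks, which is exactly how "Jen" gave herself away.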
Criminal chatbots have become quite a menace on the internet. They lurk in social networks, messaging apps and webmail, and in some chatrooms they can outnumber humans by more than two to one. Many of these tricksters are designed to build relationships with their marks before soliciting cash or attempting identity theft, whereas others simply try to lure people into clicking on a link that leads to malware. Their abundance and success are forcing researchers and companies to seek out ever-smarter ways to catch them. It's not exactly what the pioneers of artificial intelligence had in mind. We have been watching and waiting for the moment when machines become smart enough to pass as humans - but it seems to have already happened right under our noses.