August 9, 2017
Machines are getting better at thinking for themselves. Microsoft's DeepCoder, for example, has learned to write its own code, and Google has built Federated Learning, a tool that personalizes apps directly on users' devices. For marketers, this evolution in machine learning has enormous potential: almost all (98%) think it will benefit their marketing efforts. It's easy to see why confidence is high: smart technology can help brands in multiple areas, from boosting targeting precision to improving product suggestions.

But this enthusiasm is still tempered by apprehension. We live in privacy-conscious times, with the new EU-US Privacy Shield and the General Data Protection Regulation (GDPR) putting data usage in the spotlight globally. And as the machines we create become more autonomous, many marketers are starting to ask: how do we ensure intelligent tech processes data in a way that safeguards privacy and trust?

The machine learning opportunity

To start with, let's take a look at what machine learning offers. First and foremost, there's efficiency. Not only can machine learning analyze data on a scale that humans and existing programmatic tools can't achieve, but it can learn how to act on data without being explicitly programmed.

Secondly, it can be applied to myriad purposes. For example, a brand may use machine learning to make the overall customer experience seamless: connecting data across different departments and providing fast responses via tools like chatbots. Or marketers might use it to optimize advertising impact by matching ads with individual needs and attributes, such as location, instead of targeting broad audience segments. Machine learning can even be used to predict which techniques and creative will work best, assessing behavioral data over time to build tailored, relevant messages that are likely to be well received. This ensures marketers maximize engagement while minimizing wasted ad spend.
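To make the contrast with broad segments concrete, here is a minimal, hypothetical sketch of attribute-based ad matching: each ad declares the individual attributes it targets, and the best-scoring ad wins. All ad names, field names, and rules are invented for illustration; this is not any vendor's actual API.

```python
# Illustrative sketch: score ads against an individual's attributes rather
# than a broad audience segment. All fields here are hypothetical examples.

def score_ad(ad: dict, user: dict) -> int:
    """Count how many of an ad's targeting attributes match this user."""
    return sum(1 for key, value in ad["targets"].items()
               if user.get(key) == value)

def best_ad(ads: list, user: dict) -> dict:
    """Pick the ad whose targeting best matches the individual user."""
    return max(ads, key=lambda ad: score_ad(ad, user))

ads = [
    {"name": "umbrella_promo", "targets": {"city": "Seattle", "weather": "rain"}},
    {"name": "sunscreen_promo", "targets": {"city": "Phoenix", "weather": "sun"}},
]
user = {"city": "Seattle", "weather": "rain", "age_band": "25-34"}

print(best_ad(ads, user)["name"])  # umbrella_promo
```

A real system would learn these weights from behavioral data rather than hard-coding rules, but the principle is the same: the match is made per individual, not per segment.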
But marketers will only benefit from AI-based technology if their data and processes are in optimal shape, and if they can guarantee compliance with privacy laws. After all, machine learning will only ever be as good as the data and programming powering it.

Keeping a hold on privacy

Overcoming consumer privacy concerns is undoubtedly a key challenge for machine learning. Following a record-breaking number of data breaches in 2016, it's not surprising that consumers are anxious about how companies use their data, and about digital privacy in general.

To maintain consumer trust and loyalty, marketers must ensure privacy is prioritized at every stage of data processing and that machine learning is supervised. Machines that are able to program themselves, free from human guidance, risk producing unpredictable outcomes, and unknown usage can put privacy at risk. Marketers therefore need to check that the input, procedures, and output of autonomous machines are scrutinized for accuracy and privacy protection, paying particular attention to the transparency of algorithms.

They must also comply with new data privacy regulations. The EU-US Privacy Shield and the GDPR are transforming how businesses collect, process, and store European consumer data, which means both are having a significant impact on international brands. Effectively a replacement for the Safe Harbor agreement, Privacy Shield is a framework that governs the transfer of EU citizens' data to the US. The GDPR has a broader reach, applying to any business processing data that makes EU citizens personally identifiable. To meet the requirements of both, marketers will have to make all data processing transparent, including that of smart machines, especially if they want to avoid the GDPR's costly fines.

Taking control of data

The most effective way to keep data safe is to place it where its creation, usage, and storage can be tightly controlled: a centralized hub.
By unifying consumer data from disparate sources, including first-party customer data and third-party insight, marketers can analyze and manage that insight before it is shared with smart machines. This allows them to ensure that risks to consumer privacy are mitigated and that data output is accurate. It also produces a complete view of consumer journeys across all channels, which comes in handy for enhancing personalization.

Yet marketers must select the right tools if data is to be truly protected. To maintain a high level of security, systems should have the capacity to assess data from multiple data sets and filter it accordingly. And if systems are to meet transparency regulations, they need to allow instant data transfer and accessibility at all times, from every area of the organization. Only with a precise and privacy-assured view of consumer data can machine learning fulfill its potential as a tool that improves and redefines the customer experience.

Machine learning in the future

Of course, machine learning is not restricted to marketing, and in the next few years we are due to see it make waves in a wide range of sectors. For instance, computer scientists at MIT are experimenting with neural networks that can provide evidence for healthcare decisions and improve understanding of diagnoses. Clarity and privacy are due to remain center stage too: earlier this year, LinkedIn founder Reid Hoffman and eBay founder Pierre Omidyar each invested $10 million in the Ethics and Governance of Artificial Intelligence (AI) Fund, which is intended to support research into the ethical issues of AI and how to solve them. There is a long road ahead for machine learning in its many guises, be it smart tech, AI, or automated tech.
To keep their creations under control, marketers need to balance security with creativity, centralize data management to keep things simple, and adhere to data regulations, all the while never losing sight of the vast and exciting opportunities machine learning promises.

August 9, 2017
Walmart is about to use artificial intelligence in the worst way possible. According to a patent filing, the largest brick-and-mortar retailer in the world (likely looking for ways to compete with Amazon) is developing technology that can identify whether customers are unhappy or frustrated. It will likely use existing security and checkout cameras to read their faces.

As we all know, this is the year of machine learning and automation. Companies are adopting the "AI first" mantra, and it's transforming industries as we speak. Of course, many of those innovations will save us time and could even save lives (a car can swerve out of the way faster when a computer takes control than any young driver can). But they could also come at a high price, namely in reducing our privacy and creating a dystopian society.

Why do I think this will be an episode of Superstore soon? Imagine how it would work. Glenn, the store manager on the show (played by Kids in the Hall alumnus Mark McKinney), is a little unhinged already. One day, a package arrives in his office from the corporate overlords: a new webcam to install at the checkout lane. He's pretty excited. The camera sends him a text every time the facial recognition detects a customer who looks sad. How could that go wrong?

In real life, I could see Walmart employees appearing out of nowhere every time a teenager gets a text from his girlfriend or a dad running on fumes with four kids in tow has to buy diapers. It's invasive, annoying, prone to errors, not that helpful, and a bit too much like Big Brother with a new toy. Facial recognition is a great idea when it comes to logging into a laptop or passing through airport security a bit faster; it's annoying when it replaces actual human empathy. And yet it's also inevitable. When we can use facial recognition to assist in the sales process, we will, even if it seems heavy-handed or creepy.
As one writer pointed out, the technology is one way to combat customer churn: one bad experience at Walmart means a customer might spend the rest of their days shopping at Costco instead. Also, this is a dogfight. Retailers are in a slump, so there's no question they will look for ways to make sure customers are happy and never frustrated. I could see Walmart eventually using other automations to tell if you were only in the store for a short time or browsed only for trinkets.

This is not something Amazon has to worry about online. All of its automations, like showing me books that match my interests, seem innocuous or even helpful. I'm being "scanned" just as much online and fed customized information. Yet in person, it feels like someone is watching me and pretending to know my intentions. That's just not right.

The worst part about scanning for "unhappy" customers is that AI is really terrible at reading human emotions. Honestly, humans are really bad at reading human emotions. How do we create an algorithm that knows the difference between a sleepy dad and a depressed dad? What level of software automation is required to distinguish a teen who just played Call of Duty for 10 hours straight from a teen who just went through a breakup? I'm guessing this kind of AI is at least 20 years away, if not more; maybe 50.

Before Walmart starts scanning us, it has two big hurdles. First, it needs to figure out how to scan that box of cereal correctly. Then, it needs to figure out the myriad human emotions on display. Sadly, I don't think any of that will stop Glenn from asking if we're sad.

August 5, 2017
In a move long feared by privacy advocates, new rules have been adopted exempting the Next Generation Identification (NGI) system, the Federal Bureau of Investigation's database of biometric information on millions of American citizens, from Privacy Act safeguards. The database includes iris scans, photos, fingerprints, and other information, much of which stems from sources totally unrelated to law enforcement. Even individuals who have never been arrested or had issues with authorities could have biometric data stored in the NGI system, which even hoovers up data from background checks conducted on job applicants, welfare recipients, and licensed state teachers, realtors, and dentists.

In all, the database holds around 52 million photographs, searchable through facial recognition and accessible by 20,000 foreign, federal, state, and municipal-level law enforcement agencies. There are also few restrictions on what types of data can be submitted to the system, who can access the data, and how the data can be used. For example, while the FBI has promised it will not allow images from social networking sites to be saved to the system, there are no legal or codified restrictions of any kind in place to prevent exactly that.

Privacy Act rules state that any agency with access to the database is legally required to inform individuals that their data is stored on the system, but the FBI has sought an exemption for some time, claiming that acknowledging it retains biometric records of individuals could affect investigations. "The NGI system also contains latent fingerprints, as well as other biometrics, and associated personal information that may be law enforcement or national security sensitive.
Compliance could alert the subject of an authorized law enforcement activity about that particular activity and the interest of the FBI and/or other law enforcement agencies," the FBI said in a statement.

The move has long been feared by privacy advocacy and campaign groups. In July 2016, the Electronic Frontier Foundation (EFF), which filed an FOIA lawsuit in 2013 to obtain documents regarding the database, expressed its concern over the prospect. The EFF noted the FBI had amassed the database in the first place with little congressional and public oversight, and had failed for years to provide basic information about NGI as required by law, or a detailed description of the records and its policies for maintaining them.

"The FBI has sidestepped the Privacy Act as it has expanded NGI, essentially saying 'just trust us' with highly personal and private data — but the FBI hasn't proved itself worthy of the public's trust. Exemption will eliminate our rights to access our own records and take action against the government when it makes mistakes with that data. The Privacy Act is only the barest of protection for Americans, but the FBI wants to escape from even that basic responsibility," said EFF Senior Staff Attorney Jennifer Lynch at the time.

Moreover, the EFF stated the FBI refused to acknowledge the inaccuracy of its facial recognition technology or publish any data on the system's accuracy rates. Given that included images are typically "well below" the recommended resolution of 0.75 megapixels necessary for accurate identification (newer iPhone cameras are capable of at least 8-megapixel resolution), it's perhaps unsurprising that research has shown the technology frequently misidentifies African Americans, ethnic minorities, women, and young people at higher rates than white citizens and men.
As a result, errors within NGI will mean greater targeting of non-white US citizens, particularly given that FBI databases include a disproportionate number of such individuals. The larger a facial recognition data set, the larger the scope for error; at 52 million images, the NGI effectively ensures many mistakes will be made. In 2014, the Electronic Privacy Information Center filed an FOIA lawsuit and obtained records revealing the database had an error rate of up to 20 percent on facial recognition searches.
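To put that worst-case figure in perspective, a back-of-envelope calculation helps. The 20 percent rate comes from the EPIC records cited above; the search volume below is an invented assumption purely for illustration:

```python
# Back-of-envelope: erroneous matches implied by a 20% worst-case error
# rate on facial recognition searches. The daily search volume is a
# hypothetical assumption, not a figure from the FBI or EPIC.

error_rate = 0.20          # worst-case rate from the EPIC FOIA records
searches_per_day = 1_000   # invented volume, for illustration only

wrong_per_day = searches_per_day * error_rate
wrong_per_year = wrong_per_day * 365

print(int(wrong_per_day), int(wrong_per_year))  # 200 73000
```

Even at a modest assumed volume, a one-in-five error rate compounds into tens of thousands of bad matches a year, each a potential misidentification of an innocent person.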

July 31, 2017
The influence and proliferation of extremist content, hate speech, and state-sponsored propaganda on the internet have risen around the globe, as demonstrated by Russia's involvement in the US election and the rise of ISIS recruitment online. As a result, the pressure that governments, media, and civil society are placing on technology companies to take meaningful action to stem the flow of this content is at an all-time high. A recent law passed in Germany will require social media companies like Facebook and Twitter to remove illegal, racist, or slanderous content within 24 hours of it being flagged by a user, or face fines as large as $57 million. Although this legislation was passed overseas, its effects will be felt stateside, as the sites that will bear the brunt of the law are American. Furthermore, while similar legislation in the US is unlikely due to the country's strong First Amendment culture, a recent Canadian court ruling ordered that content violating Canadian law be deleted globally rather than just for Canadian users, opening the door to extraterritorial regulation that could affect American consumers.

Although governments have a legitimate interest in ensuring the safety of their citizens online, laws like this are not the answer. Government legislation is a blunt tool that is likely to compound problems, not solve them. Legislation or regulations requiring companies to remove content pose a range of risks, including potentially legitimizing repressive measures from authoritarian regimes. Hate speech, political propaganda, and extremist content are subjective categories, and interpretations vary widely among governments. Relying on governments to create and enforce regulations online affords them the opportunity to define these terms as they see fit.
Placing the power in the hands of governments also increases the likelihood that authoritarian regimes lacking Germany's liberal democratic tradition will criminalize online content critical of those governments and, ultimately, create another mechanism for oppressing their own citizens. An individual's right to freedom of expression would become wholly dependent on geography.

The internet has provided an unprecedented means for users to share ideas and connect in a manner that transcends borders. This freedom is not unrestricted, and there are valid reasons why certain content, such as child pornography, should have no place on the web. However, imposing hefty financial penalties on internet platforms, as the new German law does, all but ensures that companies will err on the side of excessive censorship, unfairly limiting the right to free speech. Government-prompted censorship of this type imposes barriers and cannibalizes the freedoms the internet was designed to provide.

The approach taken by the new German law places the primary burden of determining and enforcing the legality of online content onto the private companies that host internet platforms. Under this model, these companies will be forced to adopt a quasi-judicial function, which is problematic: the rules platforms use to police content may lack the clarity, protections, and appellate procedures that the rule of law requires.

Instead of government intervention, civil society should recognize and build upon the efforts of platforms that address these issues, while also pressing companies to do even more. Recent examples of company-led initiatives include Facebook's hiring of 3,000 more content reviewers to address violent posts on its site and Google's development of machine-learning systems to identify and remove hate speech and extremist content.
YouTube has also implemented a policy whereby violent content that does not meet the company's community guidelines for removal is stripped of engagement tools. There is no doubt these companies can, and should, do more. But the future of an open internet and freedom of speech depends on the restraint of governments and resistance to the idea that they should be dictating content.

As an alternative, governments and companies need to utilize the multi-stakeholder model that has helped the internet grow and prosper. Online content from violent extremist groups and from foreign governments that use the internet to spread false information and propaganda causes real harm. In that context, companies such as Facebook, Google, Twitter, and Microsoft have an opportunity to work together more closely, as well as with civil society organizations, governments, and academics. Together, these stakeholders need to develop scalable and transparent internal governance structures that will enable them to continue making healthy profits while mitigating the damage done by such content.

July 25, 2017
Researchers have found an unusual piece of malware, called FruitFly, that has been infecting some Mac computers for years. FruitFly operates quietly in the background, spies on users through the computer's camera, captures images of what's displayed on the screen, and logs keystrokes. Security firm Malwarebytes discovered the first strain earlier this year, but a second version, called FruitFly 2, subsequently appeared.

Mac users typically think they're immune to malware, but this new strain, used for spying, reminds us that even Macs can be compromised. Patrick Wardle, chief security researcher at security firm Synack, found 400 computers infected with the newer strain and believes there are likely many more cases out there. It's unclear how long FruitFly has been infecting computers, but researchers found the code was modified to work on the Mac Yosemite operating system, which was released in October 2014, suggesting the malware existed before that time. It's unknown who is behind it or how it got onto computers. Thomas Reed of Malwarebytes called the first version "unlike anything I've seen before."

Wardle says there are multiple strains of FruitFly. The malware uses the same spying techniques, but the code differs in each strain. After months of analyzing the new strain, Wardle decrypted parts of the code and set up a server to intercept traffic from infected computers. "Immediately, tons of victims that had been infected with this malware started connecting to me," said Wardle, adding that he could see about 400 infected computer names and IP addresses. He believes this reflects only a small subset of infected users.

The discovery of FruitFly reminds users that although Mac malware is considerably less widespread than Windows malware, it still exists. "Mac users are over-confident," Wardle said. "We might not be as careful as we should be on the internet or opening up email attachments." Apple did not respond to a request for comment.
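The interception step Wardle describes is a classic sinkhole: stand in for the malware's command-and-control address, accept the connections that infected machines make, and log who phones home. Below is a generic sketch of that technique only; FruitFly's actual protocol is not public here, and the host, port, and connection limit are invented for illustration:

```python
# Minimal sinkhole-style listener: accept connections from infected hosts
# and record their source addresses. A generic sketch of the technique,
# not FruitFly's actual command-and-control protocol.

import socket

def run_sinkhole(host: str = "0.0.0.0", port: int = 9999, max_conns: int = 5):
    """Listen for inbound connections and return the source IPs seen."""
    seen = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                seen.append(addr[0])      # log the connecting host's IP
                banner = conn.recv(1024)  # whatever the client sends first
                print(f"connection from {addr[0]}: {banner[:32]!r}")
    return seen
```

In practice, a researcher registers or redirects the malware's hard-coded server address to a machine running a listener like this, which is how "tons of victims ... started connecting" the moment the server went live.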
Mac malware has increased in recent years. According to a report from McAfee, Mac malware skyrocketed in 2016, but most of it was adware, or malicious advertising, rather than targeted spy campaigns. Wardle said FruitFly is completely new for Macs. He alerted national law enforcement to the malware; the FBI said it does not confirm or deny the existence of investigations.

It's unclear how the malware got onto machines, or whether it targeted individuals randomly or directly. Wardle, a former NSA analyst, ruled out a nation-state hacker targeting users to intercept data for cyberespionage. He also doesn't believe it's a criminal using people's data to make money. "I believe its goals were a lot more insidious and sick: spying on people," Wardle said.