Showing posts with label MACHINE LEARNING. Show all posts

Friday, April 26, 2019

Beware of bots: Automation could wipe out almost half of all jobs in 20 yrs


The OECD has highlighted a squeeze on the middle classes, future job losses from technology and widespread dissatisfaction in rich countries.


Business Standard: Automation, robots and globalisation are rapidly changing the workplace, and governments must act fast and decisively to counter the effects or face a worsening of social and economic tensions, the OECD warned.

Almost half of all jobs could be wiped out or radically altered in the next two decades due to automation, the Paris-based group said in a report on Tuesday. According to OECD Labor Director Stefano Scarpetta, the pace of change will be “startling.”

Safety nets and training systems built up over decades to protect workers are struggling to keep up with the “megatrends” changing the nature of work, the OECD said.

While some workers will benefit as technology opens new markets and increases productivity, young, low-skilled, part-time and gig-economy workers are vulnerable.
“Deep and rapid structural changes are on the horizon, bringing with them major new opportunities but also greater uncertainty among those who are not well equipped to grasp them,” Scarpetta said.

The employment report is the latest OECD warning about risks to governments in advanced economies, which have already manifested themselves in a surge of support for populist political leaders. The organization has highlighted a squeeze on the middle classes, future job losses from technology and widespread dissatisfaction in rich countries.
Changes in employment will hit some workers harder than others, particularly young people with lower levels of education and women, who are more likely to be under-employed and working in low-paid jobs, the OECD said.

It recommends more training and urges governments to extend protections to workers in the “grey zone,” where a blurring of employment and self-employment often means a lack of rights. The report also warns of “negative ramifications” for social cohesion.

Future of work highlights:
14 percent of jobs could disappear from automation in the next 15 to 20 years

32 percent likely to change radically from automation

One in seven workers is self-employed; one in nine is on a temporary contract

Six out of ten workers lack basic IT skills

Union membership has fallen by almost half in the past three decades




Thursday, January 3, 2019

Like human beings, man-made artificial intelligence also has a dark side


Though AI technology is perceived to be the next big thing, it also has implications that go beyond the generic notion of making life easier for humans.


Artificial Intelligence (AI) is one of the emerging concepts shaping the world of technology today. From analysing big data to powering autonomous vehicles, this technology has created new avenues for humans to experiment and explore. Though AI is perceived to be the next big thing in technology, it also has implications that go beyond the generic notion of making life easier for humans.

Research by Stanford and Google has revealed that a machine learning agent meant to generate street maps from aerial images was clandestinely hiding information in its output so it could retrieve it later. The finding came to light when researchers were working on improving the process of turning satellite imagery into Google Maps’ Street View feature, according to a report in the technology and start-up news portal TechCrunch.


This is not the first time that an AI-based program has been found to learn new tricks and go beyond the purpose it was designed for. Last year, Facebook AI Research (FAIR) also had to shut down chatbots powered by the social media giant’s AI engine after they appeared to be conversing in a language that only they understood.

The two examples above show that AI has, among other things, also picked up a dark side from humans. Though it is designed to work on data sourced from humans, its basic nature of learning, adapting and improvising can push it away from what it was designed for and towards what it is capable of, and that can be scary.

AI also appears to have inherited a sense of bias from humans. In 2016, an investigation by ProPublica, a US-based non-profit organisation, found that COMPAS, an AI-based software used by judges in some US states to calculate the risk of a person committing another crime, was biased against people of colour.

Such biases can also be seen in today’s smartphones with AI-driven beauty modes, which brighten the frame and treat a lighter skin tone as beautiful. Maybe this is how the program is designed to work, but there is no denying that the technology has biases, just as humans do.
Business Standard

Monday, September 17, 2018

How the latest technology and some healthy activism can curb fake news


The main takeaway from our research is that when it comes to preventing the spread of fake news, privacy is key.


The term “fake news” has become ubiquitous over the past two years. The Cambridge English dictionary defines it as “false stories that appear to be news, spread on the internet or using other media, usually created to influence political views or as a joke”.

As part of a global push to curb the spread of deliberate misinformation, researchers are trying to understand what drives people to share fake news and how its endorsement can propagate through a social network.

But humans are complex social animals, and technology misses the richness of human learning and interactions.

That’s why we decided to take a different approach in our research. We used the latest techniques from artificial intelligence to study how support for – or opposition to – a piece of fake news can spread within a social network. We believe our model is more realistic than previous approaches because individuals in our model learn endogenously from their interactions with the environment rather than simply following prescribed rules. Our novel approach allowed us to learn a number of new things about how fake news is spread.

The main takeaway from our research is that when it comes to preventing the spread of fake news, privacy is key. It is important to keep your personal data to yourself and be cautious when providing information to large social media websites or search engines.

The most recent wave of technological innovations has brought us the data-centric web 2.0 and with it a number of fundamental challenges to user privacy and the integrity of news shared in social networks. But as our research shows, there’s reason to be optimistic that technology, paired with a healthy dose of individual activism, might also provide solutions to the scourge of fake news.

Modelling human behaviour

Existing literature models the spread of fake news in a social network in one of two ways.
In the first instance, you could model what happens when people observe what their neighbours do and then use this information in a complicated calculation to optimally update their beliefs about the world.
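
The first approach can be illustrated with a small sketch of Bayesian belief updating. This is not the model from any specific paper: the function name `bayesian_update` and the `accuracy` parameter (the assumed chance that a neighbour endorses a story given that it is true) are illustrative assumptions.

```python
def bayesian_update(prior, signal, accuracy=0.7):
    """Update the probability that a story is true after observing one
    neighbour endorse it (signal=True) or reject it (signal=False).

    `accuracy` is the assumed probability that a neighbour endorses a
    true story (and rejects a false one)."""
    likelihood_true = accuracy if signal else 1 - accuracy
    likelihood_false = (1 - accuracy) if signal else accuracy
    # Bayes' rule: P(true | signal) proportional to P(signal | true) * P(true)
    return (likelihood_true * prior) / (
        likelihood_true * prior + likelihood_false * (1 - prior)
    )

# Start undecided, then observe three neighbours in turn.
belief = 0.5
for endorsed in [True, True, False]:
    belief = bayesian_update(belief, endorsed)
print(round(belief, 3))
```

With one net endorsement and a 70 percent assumed accuracy, the belief settles at 0.7 — each observation multiplies the odds by a fixed likelihood ratio, so opposing signals cancel out.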

The second approach assumes that people follow a simple majority rule: everyone does what most of their neighbours do.
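
The majority rule above can be sketched in a few lines. This is a minimal toy simulation, not the authors' actual model: the ring-shaped network, the synchronous update and the tie-breaking choice are all illustrative assumptions.

```python
def majority_rule_step(network, beliefs):
    """One synchronous update: each node adopts the majority belief of
    its neighbours (True = endorses the fake story); ties keep the
    node's current belief."""
    new_beliefs = {}
    for node, neighbours in network.items():
        endorsing = sum(beliefs[n] for n in neighbours)
        if endorsing * 2 > len(neighbours):
            new_beliefs[node] = True
        elif endorsing * 2 < len(neighbours):
            new_beliefs[node] = False
        else:
            new_beliefs[node] = beliefs[node]
    return new_beliefs

# A six-node ring where only node 0 initially endorses the fake story.
network = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
beliefs = {i: (i == 0) for i in range(6)}

for _ in range(3):
    beliefs = majority_rule_step(network, beliefs)

print(sum(beliefs.values()))  # nodes still endorsing the story
```

In this toy network a lone endorser is overturned by its neighbours after one step, which hints at why majority-rule models care so much about how many seeds a story starts with and how the network is wired.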

Article Source: Business Standard