

[AI EXPERIENCE] BUILDING ETHICS INTO ARTIFICIAL INTELLIGENCE



In the second episode of the AI Experience series, we delve into the role of ethics in the evolution of artificial intelligence.

The benefits of AI notwithstanding, concerns persist about the ethical implications of a technology that could potentially know more about its users than they know about themselves. And for as long as there have been intelligent machines, there has been skepticism and distrust. While some of this wariness can be traced back to the way artificial intelligence has been portrayed in science fiction books and movies, it is no exaggeration to say that suspicion toward the technology is widespread. In fact, the late Stephen Hawking once said, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate.”

Photo Credit: European Commission

To prevent such scenarios from ever coming true, the European Union recently put into place regulations to ensure that “good ethics” are built into all AI technologies. Regarding the new guidelines, Margrethe Vestager, executive vice president of the European Commission for A Europe Fit for the Digital Age, said, “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”1

Before we delve into ethical AI, it is important to understand how AI learns – and what (and whom) it is learning from – in the first place. Because AI gains insight from data collected from existing societal structures and based on parameters set by human beings – such as researchers and developers – and the companies they work for, it is inevitable that the technology will reflect at least some of the biases, tendencies and preconceptions that exist within those separate but intimately related elements.
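
To make this concrete, here is a minimal, purely illustrative Python sketch of that dynamic. The toy “hiring records”, group labels and numbers below are invented assumptions, not real data; the point is simply that a model which learns only from imbalanced historical data ends up reproducing that imbalance in its predictions.

    # Illustrative only: toy records of (group, hired) with an assumed historical imbalance.
    from collections import defaultdict

    records = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
               ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

    # "Training": estimate the historical hire rate for each group.
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    hire_rate = {g: hires[g] / totals[g] for g in totals}

    # "Prediction": a naive model that scores candidates by their group's past rate
    # simply carries the historical imbalance forward instead of correcting it.
    def predicted_score(group):
        return hire_rate[group]

    print(predicted_score("A"))  # 0.75
    print(predicted_score("B"))  # 0.25

Scaled up to real systems trained on far larger datasets, this is how biases present in the source data can quietly become model behavior unless they are deliberately measured and corrected for.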

“Human centric means to take into consideration the human aspect of how the tools are going to be used, for what purpose, and what’s the consequence for humans who use the tool,” said Yoshua Bengio, 2019 Turing Award-winning AI researcher and founder of Mila (originally Montreal Institute for Learned Algorithms). “It’s important because those tools are becoming more and more powerful, and the more powerful the tool is, the more we need to be careful about how to use it.”

So now let’s take a look at five areas that are integral to the ethical development and use of AI, moving forward: inclusivity, values, governance, data privacy and purpose.

Inclusivity
Acknowledging and factoring in diversity is central to producing AI systems that meet the needs of a diverse global population. However, a study published in 2019 concluded that there is an alarming lack of diversity in the AI field and that this is perpetuating gender, racial and religious biases.2

In response to this state of affairs, groups such as the African Institute for Mathematical Sciences, which launched courses to train young Africans in machine learning and its applications in order to diversify the talent pipeline, are calling for initiatives to achieve a more equitable future for AI. But more consistent and far-reaching efforts are needed if there is to be better cultural and gender representation in the lab and in the industry.

“If you don’t have diversity among the people who are doing the designing and the people doing the testing, the people who are involved in the process, then you’re all but guaranteed to have a narrow solution,” says Charles Isbell, dean of computing at Georgia Tech and a strong advocate for increasing access to and diversity in higher education.

Values
To a great extent, a nation’s values determine the philosophy behind its AI development. The decoupling of technology between countries is often driven by ethical considerations rooted in fundamental differences in ideology and values. For example, how far governments are permitted to intrude upon people’s private lives varies widely from country to country.

To help bridge these differences, private entities should consider universal human values when designing AI systems and take responsibility for products that strongly impact society.

“That’s why it’s not just about maximizing their profits… but taking the responsibility, in the way that the action will have an impact on society,” explains Dr. Yuko Harayama of the Japanese scientific research institute RIKEN, a former executive member of Japan’s Council for Science, Technology and Innovation at the Cabinet Office. “It’s up to us because we are all human beings and that means you are responsible for your action, including your action within your company.”

Governance
According to a list of 20 AI-enabled crimes put together by researchers at University College London, the biggest threat to civil order comes not from the technology itself, but from humans using it for their own illegal ends. Using driverless vehicles as weapons was among the possible crimes presented in the study.

Historically, it has been the role of governments to ensure public safety through regulation and oversight, but with AI, lawmakers face the difficulty of legislating a technology that is constantly evolving and challenging to comprehend. Rather than leaving this matter exclusively in the hands of government, a broad, interdisciplinary effort is required so that any legislation encompasses a wider range of viewpoints and is built on a deeper fundamental understanding of AI.

Data Privacy
AI systems for consumers have relatively few safeguards compared to those designed for industrial or military use, making them more susceptible to personal data breaches or misuse. This is why fostering trust is so important when it comes to human-centric AI systems – trust that users’ data is safe and protected, and that it isn’t being used for purposes the owner hasn’t consented to.

“I think the biggest things we need to consider is what data is being collected, who is collecting it, where it is staying, and how it is being used and reused,” said Alex Zafiroglu, deputy director at the 3A Institute of The Australian National University, emphasizing the need for transparency in the use and collection of data for consumer AI solutions.
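
As a purely hypothetical illustration of that kind of transparency, a developer could keep a machine-readable record that answers exactly those questions for each category of data collected. The field names and the product name below are invented for the example; they are not a standard or an existing API.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataCollectionRecord:
        what: str                                          # what data is being collected
        collector: str                                     # who is collecting it
        storage_location: str                              # where it is staying
        uses: List[str] = field(default_factory=list)      # how it is being used
        reuses: List[str] = field(default_factory=list)    # how it is being reused or shared

    record = DataCollectionRecord(
        what="voice command audio",
        collector="ExampleSpeaker companion app",          # hypothetical product
        storage_location="cloud storage, EU region",
        uses=["speech-to-text for the current request"],
        reuses=[],                                         # no secondary use declared
    )
    print(record)

Publishing records like these, and keeping them accurate, is one way a company could make its data practices inspectable rather than asking users to take them on faith.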

Purpose
To deliver relevant services and maximum convenience, AI-based systems require users to share certain pieces of personal data. The more personalized the experience provided, the more private data the user has to share.

It is critical, then, that the purpose for which this data is used is clearly defined at the outset and strictly adhered to by the service providers and manufacturers concerned. If AI employs collected data only for its stated purpose, end users will feel less concerned about sharing their personal information. This in turn would enable intelligent products and services to deliver more value and the companies that produce them to better fine-tune their offerings.
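
As a hedged sketch of what “strictly adhered to” could look like in code, data access might be gated behind a purpose check, so that stored data can only be read for a purpose the user actually consented to. The user IDs, purposes and function names below are hypothetical.

    # Illustrative purpose-limitation check; all identifiers are made up for the example.
    consents = {
        "user-123": {"movie recommendations"},   # purposes this user has consented to
    }

    user_data = {
        "user-123": {"watch_history": ["drama", "sci-fi"]},
    }

    class PurposeError(PermissionError):
        """Raised when data is requested for a purpose the user never consented to."""

    def read_user_data(user_id: str, purpose: str) -> dict:
        # Refuse access unless the stated purpose matches a recorded consent.
        if purpose not in consents.get(user_id, set()):
            raise PurposeError(f"no consent recorded for purpose: {purpose!r}")
        return user_data[user_id]

    print(read_user_data("user-123", "movie recommendations"))   # allowed
    # read_user_data("user-123", "targeted advertising")         # would raise PurposeError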

As its presence and uses continue to expand into all areas of life, artificial intelligence presents human society with the potential for incredible advancement. While significant risks exist, they can be effectively mitigated by making ethics central to all AI development.

# # #

1
2

 
