By kumar rajan
LG should provide the Google Play Store and work like a full Android system, the way a mobile phone does, supporting all apps downloaded from the Play Store.
By News Reporter
In this segment of On the Job, we take a look at user experience (UX) designers and the crucial role they play in LG’s Vehicle component Solutions Company.
The future of mobility is fast approaching, with software predicted to account for 90 percent of all vehicle-related innovation within a decade.1 According to some estimates, around 461 million vehicles will have been equipped with digital head units – the control center for an automobile’s information and entertainment systems – and 115 million with digital cockpit architecture, between the years 2020 and 2030.2
It’s software that gives digital dashboard displays, head units and cockpits, among other in-vehicle systems, their ability to enhance the overall driver and passenger experience. And as an expert in both plastic OLED (P-OLED) and user experience (UX), LG is helping to shape the future of mobility with advanced in-vehicle digital displays.
As the automotive interior becomes more complex and more informative, the role of the UX designer is growing in importance. While UX design may not be as well-known as a car’s exterior design, there is no question that it’s one of the most dynamic fields in design today, with much room to evolve. Currently, the UX in vehicles relates to three main elements: traditional driver clusters such as speedometers and odometers; rear-seat entertainment (RSE); and audio, video and navigation (AVN).
From left: Park Ji-yeong, Yoo Ah-yeon, Ko Seung-yeon, Oh Ji-won, Jeong Hye-in, Ahn Jong-yoon
From concept to production, the development cycle of a car today takes years, and LG’s UX designers are involved every step of the way, considering every angle and every possible experience. Safety and convenience are of the greatest importance to LG UX designers, especially when it comes to AVN and driver cluster displays.
Details ranging from font, text size, screen brightness and visibility at different times of the day to the accessibility of commonly used functions are all factored in to produce an optimized graphical user interface (GUI). And because drivers’ preferences differ widely, physical controls must coexist seamlessly with more flexible touch displays without confusing the vehicle’s occupants.
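To make the idea of an adaptive GUI concrete, here is a minimal, purely illustrative Python sketch. The class, parameter names and lux threshold are hypothetical (not LG’s actual software); it simply shows how a display profile might switch brightness and font size based on an ambient-light reading, one of the visibility factors mentioned above.

```python
from dataclasses import dataclass


@dataclass
class DisplayProfile:
    """Hypothetical GUI profile for an in-vehicle display."""
    font_size_pt: int      # larger fonts aid glanceability at night
    brightness_pct: int    # lower brightness reduces glare in the dark


# Illustrative day/night profiles; real values would come from user studies.
DAY = DisplayProfile(font_size_pt=14, brightness_pct=90)
NIGHT = DisplayProfile(font_size_pt=16, brightness_pct=40)


def select_profile(ambient_lux: float, threshold_lux: float = 100.0) -> DisplayProfile:
    """Pick a profile from an ambient-light sensor reading (hypothetical threshold)."""
    return DAY if ambient_lux >= threshold_lux else NIGHT
```

In a real system the switch would be smoothed (hysteresis, gradual dimming) rather than a hard threshold, precisely the kind of detail a UX designer tunes.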
But beyond usability and safety, LG’s in-car displays must also deliver exceptional style. The integrated infotainment system must exude a sleek, modern aesthetic that complements the interior and makes the driver and passengers feel as though they’ve stepped into a luxurious cockpit.
But a UX designer’s role doesn’t stop at the drawing board or the CAD system. LG UX designers are also well versed in customer service.
“While working on the digital cockpit for a client, I took up residence in their offices for many months because the collaboration with our clients doesn’t simply end with the design,” said Park Ji-yeong, senior UX designer at LG Electronics. “Automobile software is updated for years, even for models launched almost a decade ago, so we make it a priority to follow up with our clients regularly.”
As UX designers at LG work to advance the in-vehicle experience, the information and entertainment technology inside the vehicle cabin will continue to evolve and become more complex. So it’s a fairly sure bet that the job of a vehicle user experience designer will only be more vital to the driving experience, even if the driving is mostly done by the car.
# # #
By News Reporter
In the second episode of the AI Experience series, we delve into the role of ethics in the evolution of artificial intelligence.
The benefits of AI notwithstanding, concerns persist about the ethical implications of a technology that could potentially know more about its users than they do about themselves. And for as long as there have been intelligent machines, there have been skepticism and distrust. While some of this wariness can be traced back to the way artificial intelligence has been portrayed in science fiction books and movies, it’s no exaggeration to say that suspicion toward such technology is widespread. In fact, the late Stephen Hawking once said, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate.”
Photo Credit: European Commission
To prevent such scenarios from ever coming true, the European Union recently put in place regulations to ensure that “good ethics” are built into all AI technologies. Regarding the new guidelines, the European Commission’s executive vice president for A Europe Fit for the Digital Age said, “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”1
Before we delve into ethical AI, it is important to understand how AI learns – and what (and whom) it is learning from – in the first place. Because AI gains insight from data collected from existing societal structures and based on parameters set by human beings – such as researchers and developers – and the companies they work for, it is inevitable that the technology will reflect at least some of the biases, tendencies and preconceptions that exist within those separate but intimately related elements.
“Human centric means to take into consideration the human aspect of how the tools are going to be used, for what purpose, and what’s the consequence for humans who use the tool,” said Yoshua Bengio, 2019 Turing Award-winning AI researcher and founder of Mila (originally Montreal Institute for Learned Algorithms). “It’s important because those tools are becoming more and more powerful, and the more powerful the tool is, the more we need to be careful about how to use it.”
So now let’s take a look at five areas that are integral to the ethical development and use of AI, moving forward: inclusivity, values, governance, data privacy and purpose.
Acknowledging and factoring in diversity is central to producing AI systems that meet the needs of a diverse global population. However, a study published in 2019 concluded that there is an alarming lack of diversity in the AI field and that this is perpetuating all kinds of gender, race and religious biases.2
In response to this state of affairs, groups such as the African Institute for Mathematical Sciences, which launched courses to train young Africans in machine learning and its applications in order to diversify the talent pipeline, are calling for initiatives to achieve a more equitable future for AI. But more consistent and far-reaching efforts are needed if there is to be better cultural and gender representation in the lab and in the industry.
“If you don’t have diversity among the people who are doing the designing and the people doing the testing, the people who are involved in the process, then you’re all but guaranteed to have a narrow solution,” says Charles Isbell, dean of computing at Georgia Tech and a strong advocate for increasing access to and diversity in higher education.
To a great extent, a nation’s values determine the philosophy behind its AI development. Decisions to decouple technologically from other countries are often made on ethical grounds rooted in fundamental differences in ideology and values. For example, how far governments can intrude upon people’s private lives varies widely from country to country.
Given these differences, private entities should consider universal human values when designing AI systems and take responsibility for products that strongly impact society.
“That’s why it’s not just about maximizing their profits… but taking the responsibility, in the way that the action will have an impact on society,” explains Dr. Yuko Harayama of Japanese scientific research institute, RIKEN, and former executive member of Japan’s Council for Science, Technology and Innovation Cabinet Office. “It’s up to us because we are all human beings and that means you are responsible for your action, including your action within your company.”
According to a list of 20 AI-enabled crimes put together by researchers at University College London, the biggest threat to civil order comes not from the technology itself, but from humans using it to their own illegal ends. Using driverless vehicles as weapons was among the possible crimes presented in the study.
Historically, it has been the role of governments to ensure public safety through regulation and oversight, but with AI, lawmakers are faced with the difficulty of legislating a technology that is constantly evolving and challenging to comprehend. Rather than leaving this matter exclusively in the hands of the government, a broad, interdisciplinary effort is required so that any legislation encompasses a wider range of viewpoints and is built on a deeper fundamental understanding of AI.
AI systems for consumers have relatively few safeguards compared to those designed for industrial or military use, making them more susceptible to personal data breaches or misuse. This is why fostering trust is so important when it comes to human-centric AI systems – trust that users’ data is safe and protected, and that it isn’t being used for purposes without the owner’s consent.
“I think the biggest things we need to consider is what data is being collected, who is collecting it, where it is staying, and how it is being used and reused,” said Alex Zafiroglu, deputy director at the 3A Institute of The Australian National University, emphasizing the need for transparency in the use and collection of data for consumer AI solutions.
To deliver relevant services and maximum convenience, AI-based systems require users to share certain pieces of personal data. The more personalized the experience provided, the more private data the user has to share.
It is critical, then, that the purpose for which this data is used be clearly defined at the outset and strictly adhered to by the service providers and manufacturers concerned. If AI employs collected data only for its stated purpose, end users will feel less concerned about sharing their personal information. This, in turn, would enable intelligent products and services to deliver more value, and the companies that produce them to better fine-tune their offerings.
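The purpose-limitation principle described above can be sketched in code. The following Python example is purely illustrative (the class names, fields and purposes are hypothetical, not any real product’s API): every read of a user’s data must name a purpose the user has actually consented to, and anything else is refused.

```python
class ConsentError(PermissionError):
    """Raised when data is requested for a purpose the user never consented to."""


class UserData:
    """Hypothetical purpose-limited wrapper around a user's personal data."""

    def __init__(self, data: dict, consented_purposes: set):
        self._data = data
        self._purposes = consented_purposes

    def read(self, field: str, purpose: str):
        # Refuse any access whose declared purpose lacks user consent.
        if purpose not in self._purposes:
            raise ConsentError(f"no consent for purpose: {purpose}")
        return self._data[field]


# Example: the user consented to navigation, but not advertising.
profile = UserData({"location": "Seoul"}, {"navigation"})
profile.read("location", "navigation")       # allowed
# profile.read("location", "advertising")    # would raise ConsentError
```

Forcing callers to declare a purpose at every access point also produces an audit trail, which supports the transparency about collection and use that Zafiroglu calls for below.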
As its presence and uses continue to expand in all areas of life, artificial intelligence presents human society with the potential for incredible advancement. While significant risks exist, these can be effectively mitigated by making ethics the center of all AI development.
# # #