Women in AI: Ewa Luger explores how AI affects culture — and vice versa

April 21, 2024
in Technology

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Ewa Luger is co-director of the Institute of Design Informatics and co-director of the Bridging Responsible AI Divides (BRAID) program, backed by the Arts and Humanities Research Council (AHRC). She works closely with policymakers and industry, and is a member of the U.K. Department for Culture, Media and Sport (DCMS) college of experts, a cohort that provides scientific and technical advice to the department.

Luger’s research explores social, ethical and interactional issues in the context of data-driven systems, including AI systems, with a particular interest in design, the distribution of power, spheres of exclusion, and user consent. Previously, she was a fellow at the Alan Turing Institute, served as a researcher at Microsoft, and was a fellow at Corpus Christi College at the University of Cambridge.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

After my PhD, I moved to Microsoft Research, where I worked in the user experience and design group in the Cambridge (U.K.) lab. AI was a core focus there, so my work naturally developed more fully into that area and expanded out into issues surrounding human-centered AI (e.g., intelligent voice assistants).

When I moved to the University of Edinburgh, it was due to a desire to explore issues of algorithmic intelligibility, which, back in 2016, was a niche area. I’ve found myself in the field of responsible AI and currently jointly lead a national program on the subject, funded by the AHRC.

What work are you most proud of in the AI field?

My most-cited work is a paper about the user experience of voice assistants (2016). It was the first study of its kind and is still highly cited. But the work I’m personally most proud of is ongoing. BRAID is a program I jointly lead, designed in partnership with a philosopher and ethicist. It’s a genuinely multidisciplinary effort designed to support the development of a responsible AI ecosystem in the U.K.

In partnership with the Ada Lovelace Institute and the BBC, it aims to connect arts and humanities knowledge to policy, regulation, industry and the voluntary sector. We often overlook the arts and humanities when it comes to AI, which has always seemed bizarre to me. When COVID-19 hit, the value of the creative industries was so profound; we know that learning from history is critical to avoid making the same mistakes, and philosophy is the root of the ethical frameworks that have kept us safe and informed within medical science for many years. Systems like Midjourney rely on artist and designer content as training data, and yet somehow these disciplines and practitioners have little to no voice in the field. We want to change that.

More practically, I’ve worked with industry partners like Microsoft and the BBC to co-produce responsible AI challenges, and we’ve worked together to find academics that can respond to those challenges. BRAID has funded 27 projects so far, some of which have been individual fellowships, and we have a new call going live soon.

We’re designing a free online course for stakeholders looking to engage with AI, setting up a forum where we hope to engage a cross-section of the population as well as other sectoral stakeholders to support governance of the work — and helping to explode some of the myths and hyperbole that surround AI at the moment.

I know that kind of narrative is what floats the current investment around AI, but it also serves to cultivate fear and confusion among those people who are most likely to suffer downstream harms. BRAID runs until the end of 2028, and in the next phase, we’ll be tackling AI literacy, spaces of resistance, and mechanisms for contestation and recourse. It’s a (relatively) large program at £15.9 million over six years, funded by the AHRC.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

That’s an interesting question. I’d start by saying that these issues aren’t solely issues found in industry, which is often perceived to be the case. The academic environment has very similar challenges with respect to gender equality. I’m currently co-director of an institute — Design Informatics — that brings together the school of design and the school of informatics, and so I’d say there’s a better balance both with respect to gender and with respect to the kinds of cultural issues that limit women reaching their full professional potential in the workplace.

But during my PhD, I was based in a male-dominated lab and, to a lesser extent, when I worked in industry. Setting aside the obvious effects of career breaks and caring, my experience has been of two interwoven dynamics. Firstly, there are much higher standards and expectations placed on women — for example, to be amenable, positive, kind, supportive, team-players and so on. Secondly, we’re often reticent when it comes to putting ourselves forward for opportunities that less-qualified men would quite aggressively go for. So I’ve had to push myself quite far out of my comfort zone on many occasions.

The other thing I’ve needed to do is to set very firm boundaries and learn when to say no. Women are often trained to be (and seen as) people pleasers. We can be too easily seen as the go-to person for the kinds of tasks that would be less attractive to your male colleagues, even to the extent of being assumed to be the tea-maker or note-taker in any meeting, irrespective of professional status. And it’s only really by saying no, and making sure that you’re aware of your value, that you ever end up being seen in a different light. It’s overly generalizing to say that this is true of all women, but it has certainly been my experience. I should say that I had a female manager while I was in industry, and she was wonderful, so the majority of sexism I’ve experienced has been within academia.

Overall, the issues are structural and cultural, and so navigating them takes effort — firstly in making them visible and secondly in actively addressing them. There are no simple fixes, and any navigation places yet more emotional labor on females in tech.

What advice would you give to women seeking to enter the AI field?

My advice has always been to go for opportunities that allow you to level up, even if you don’t feel that you’re 100% the right fit. Let them decline rather than you foreclosing opportunities yourself. Research shows that men go for roles they think they could do, but women only go for roles they feel they already can or are doing competently. Currently, there’s also a trend toward more gender awareness in the hiring process and among funders, although recent examples show how far we have to go.

If you look at the U.K. Research and Innovation AI hubs, a recent high-profile, multi-million-pound investment, all nine of the AI research hubs announced are led by men. We should really be doing better to ensure gender representation.

What are some of the most pressing issues facing AI as it evolves?

Given my background, it’s perhaps unsurprising that I’d say that the most pressing issues facing AI are those related to the immediate and downstream harms that might occur if we’re not careful in the design, governance and use of AI systems.

The most pressing issue, and one that has been heavily under-researched, is the environmental impact of large-scale models. We might choose at some point to accept those impacts if the benefits of the application outweigh the risks. But right now, we’re seeing widespread use of systems like Midjourney run simply for fun, with users largely, if not completely, unaware of the impact each time they run a query.

Another pressing issue is how we reconcile the speed of AI innovations and the ability of the regulatory climate to keep up. It’s not a new issue, but regulation is the best instrument we have to ensure that AI systems are developed and deployed responsibly.

It’s very easy to assume that what has been called the democratization of AI — by this, I mean systems such as ChatGPT being so readily available to anyone — is a positive development. However, we’re already seeing the effects of generated content on the creative industries and creative practitioners, particularly regarding copyright and attribution. Journalism and news producers are also racing to ensure their content and brands are not affected. This latter point has huge implications for our democratic systems, particularly as we enter key election cycles. The effects could be quite literally world-changing from a geopolitical perspective. It also wouldn’t be a list of issues without at least a nod to bias.

What are some issues AI users should be aware of?

Not sure if this relates to companies using AI or regular citizens, but I’m assuming the latter. I think the main issue here is trust. I’m thinking, here, of the many students now using large language models to generate academic work. Setting aside the moral issues, the models are still not good enough for that. Citations are often incorrect or out of context, and the nuance of some academic papers is lost.

But this speaks to a wider point: You can’t yet fully trust generated text and so should only use those systems when the context or outcome is low risk. The obvious second issue is veracity and authenticity. As models become increasingly sophisticated, it’s going to be ever harder to know for sure whether it’s human or machine-generated. We haven’t yet developed, as a society, the requisite literacies to make reasoned judgments about content in an AI-rich media landscape. The old rules of media literacy apply in the interim: Check the source.

Another issue is that AI is not human intelligence, and so the models aren’t perfect — they can be tricked or corrupted with relative ease if one has a mind to.

What is the best way to responsibly build AI?

The best instruments we have are algorithmic impact assessments and regulatory compliance, but ideally, we’d be looking for processes that actively seek to do good rather than just seeking to minimize risk.

Going back to basics, the obvious first step is to address the composition of designers — ensuring that AI, informatics and computer science as disciplines attract women, people of color and representation from other cultures. It’s obviously not a quick fix, but we’d clearly have addressed the issue of bias earlier if it was more heterogeneous. That brings me to the issue of the data corpus, and ensuring that it’s fit-for-purpose and efforts are made to appropriately de-bias it.

Then there comes the need to train systems architects to be aware of moral and socio-technical issues — placing the same weight on these as we do the primary disciplines. Then we need to give systems architects more time and agency to consider and fix any potential issues. Then we come to the matter of governance and co-design, where stakeholders should be involved in the governance and conceptual design of the system. And finally, we need to thoroughly stress-test systems before they get anywhere near human subjects.

Ideally, we should also be ensuring that there are mechanisms in place for opt-out, contestation and recourse — though much of this is covered by emerging regulations. It seems obvious, but I’d also add that you should be prepared to kill a project that’s set to fail on any measure of responsibility. There’s often something of the fallacy of sunk costs at play here, but if a project isn’t developing as you’d hope, then raising your risk tolerance rather than killing it can result in the untimely death of a product.

The European Union’s recently adopted AI Act covers much of this, of course.

How can investors better push for responsible AI?

Taking a step back here, it’s now generally understood and accepted that the whole model that underpins the internet is the monetization of user data. In the same way, much, if not all, of AI innovation is driven by capital gain. AI development in particular is a resource-hungry business, and the drive to be the first to market has often been described as an arms race. So, responsibility as a value is always in competition with those other values.

That’s not to say that companies don’t care, and there has also been much effort made by various AI ethicists to reframe responsibility as a way of actually distinguishing yourself in the field. But this feels like an unlikely scenario unless you’re a government or another public service. It’s clear that being the first to market is always going to be traded off against a full and comprehensive elimination of possible harms.

But coming back to the term responsibility. To my mind, being responsible is the least we can do. When we say to our kids that we’re trusting them to be responsible, what we mean is, don’t do anything illegal, embarrassing or insane. It’s literally the basement when it comes to behaving like a functioning human in the world. Conversely, when applied to companies, it becomes some kind of unreachable standard. You have to ask yourself, how is this even a discussion that we find ourselves having?

Also, the incentives to prioritize responsibility are pretty basic and relate to wanting to be a trusted entity while also not wanting your users to come to newsworthy harm. I say this because plenty of people at the poverty line, or those from marginalized groups, fall below the threshold of interest, as they don’t have the economic or social capital to contest any negative outcomes, or to raise them to public attention.

So, to loop back to the question, it depends on who the investors are. If it’s one of the big seven tech companies, then they’re covered by the above. They have to choose to prioritize different values at all times, and not only when it suits them. For the public or third sector, responsible AI is already aligned to their values, and so what they tend to need is sufficient experience and insight to help make the right and informed choices. Ultimately, to push for responsible AI requires an alignment of values and incentives.

Copyright for syndicated content belongs to the linked source: TechCrunch – https://techcrunch.com/2024/04/20/women-in-ai-ewa-luger-studies-how-ai-impacts-culture-and-vice-versa/

Tags: Luger, technology, women