In an era marked by the rapid evolution of technology, the integration of artificial intelligence (AI) holds immense promise for small and medium enterprises (SMEs) seeking innovation, efficiency, and competitive advantages.
However, as SMEs embark on the journey of AI adoption, they are confronted with a myriad of ethical challenges that demand careful consideration. From data privacy concerns to algorithmic biases, the ethical landscape surrounding AI is complex and dynamic.
In this exploration, we delve into the fundamental question: how can your SME navigate the ethical challenges inherent in AI adoption? Answering it is essential if SMEs are to harness the transformative power of AI responsibly, keeping ethical considerations at the forefront of technological advancement.
In this week’s edition of Let’s Talk, our experts undertake a reflective journey into the ethical dimensions that small and medium enterprises (SMEs) must navigate in the era of artificial intelligence.
Let’s Talk.
Craig Nielsen, Vice President, APAC at GitLab
“The ethical use of AI requires guardrails to be in place for it to be implemented responsibly — for organisations and their customers. The top two concerns we hear from customers are about copyright / IP protection and security. The GitLab Global DevSecOps Report showed 48% of respondents are concerned that code generated using AI may not be subject to the same copyright protection as human-generated code, and 42% worry that code generated using AI may introduce security vulnerabilities.
“Without considering how AI tools store and protect proprietary corporate, customer and partner data, organisations may be vulnerable to security risks, fines, customer attrition and reputational damage. This is especially important for organisations in highly regulated environments, such as public sector, financial services or health care, that must adhere to strict external regulatory and compliance obligations.
“To ensure intellectual property is contained and protected, organisations must create strict policies outlining the approved usage of AI tools and services. When incorporating third-party platforms for AI, organisations should conduct a thorough due diligence assessment ensuring that their data, both the model prompt and output, will not be used for AI/ML model training and fine-tuning, which may inadvertently expose their intellectual property to other organisations.”
Dr Hoon Wei Lim, Principal Director, Cyber Special Ops – R&D at NCS
“When SMEs employ AI technologies like ChatGPT, ethical considerations are paramount. SMEs must be vigilant about data privacy and security, ensuring sensitive information is used with explicit consent and protected through robust anonymization and data handling protocols. Accuracy and reliability of AI outputs must be guaranteed through regular validation, audits, and user feedback to prevent misinformation and ensure trustworthiness. SMEs should also adhere to legal compliance, respecting copyright, privacy laws, and intellectual property rights to avoid legal repercussions.
“Transparency and explainability are vital; SMEs should strive for AI systems that can elucidate their decision-making processes, enabling users to understand and trust AI outputs. Mitigating bias is crucial; AI should be routinely assessed and corrected for any inherent biases to ensure fairness and prevent discriminatory practices. By integrating these ethical considerations into their operations, SMEs can leverage AI responsibly, fostering innovation while minimizing potential risks and ensuring equitable benefits for all stakeholders.”
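As an illustrative aside: the robust anonymisation Dr Lim describes can start with something as simple as redacting obvious identifiers before any text leaves the business. The Python sketch below is a minimal, assumed example (the patterns and function names are ours, not any vendor's); a production system would need a dedicated PII-detection tool covering names, addresses and ID numbers.

```python
import re

# Illustrative patterns only; real systems need far broader coverage
# (names, addresses, government IDs) via a dedicated PII library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace likely PII with placeholder tokens before the text
    leaves the organisation (e.g. in a prompt to a hosted model)."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Follow up with jane.doe@example.com on +61 2 9999 9999 about her claim."
print(redact_pii(prompt))
# Follow up with [EMAIL] on [PHONE] about her claim.
```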
Brad Drysdale, Field CTO at SnapLogic
“Caroline Criado Perez’s Invisible Women sheds light on the pervasive bias ingrained in our society. Everything from room temperatures to the design of everyday items caters predominantly to male preferences. This bias extends beyond the digital realm, emphasising that it’s not just an internet issue.
“For SMEs relying on ChatGPT and similar language models for customer interactions, there’s a risk of unintentional bias affecting their brand. This could lead to integrity issues and compromise the brand’s reputation.
“SMEs need to ask — can this data be trusted? Is it secure? Is it ethical? Is it biased?
“Even if all of these things were accounted for, AI models can hallucinate, generating content they believe to be true. This poses a risk, as generative models, when untethered, may produce biased or incorrect data and further train on this false information, perpetuating the problem.
“There’s a lot of public concern in the industry on how to deal with that. However, I think regulation and good hygiene around how we accept this technology and use it is the only way forward. You can’t stop it or slow it down.”
David Fairman, Chief Information and Security Officer APAC at Netskope
“If we consider generative AI such as ChatGPT, SMEs should ensure that sensitive company information, especially of customers and external stakeholders, is not used, because those tools do not keep secrets, and incidents happen to even the largest organisations.
“SMEs also consume AI via the business applications they use, and should consider the ethical aspects as well. For example, if a recruitment tool they use to screen candidates with AI is proved to be biased, what ramifications may that have? Some jurisdictions have started penalising companies using biased AI, which may become an AI regulation standard in the future.
“Finally, with its democratisation, SMEs and startups will have more opportunities to build AI themselves, and the following “Responsible AI” principles provide a good ethical framework:
Security and privacy: the secure use and development of AI, protection of AI in production, and protection from AI attacks
Transparency: ensuring the black-box decisions of AI can be explained and justified
Fairness: ensuring AI is fair and free of bias
Inclusiveness: involving various stakeholders and teams within the business in the design and oversight process
Accountability: defining ownership and accountability within the organisation for each AI in use.”
Elise Balsillie, Head of Thryv Australia
“For small businesses, AI has gained interest as a powerful and cost-effective tool, particularly when it comes to marketing. It can help with building brand awareness and garnering valuable insights into customer behaviours, which ultimately can lead to customer sales and business growth.
“But as the AI arena becomes more mainstream, it’s important for small business owners to consider some of the challenges that can also come with the adoption of AI.
“To help educate small business owners, here are my top four challenges to consider:
Data security. With the increasing use of AI, data security becomes a significant concern. Small businesses need to protect sensitive customer data and intellectual property.
AI ethics. As Gen AI becomes more powerful, ethical considerations become paramount. Small businesses must watch out for potential biases in AI models and ensure they are used responsibly.
Regulatory awareness. Stay up to date with AI regulations and data privacy laws. Gen AI can raise ethical and legal concerns, so compliance is crucial.
Vendor transparency. If your vendor is using AI, ask them how they are using it so that you understand what it means for your business.
“By staying informed, focusing on ethics and security, and being prepared for changes, small businesses can successfully navigate the evolving landscape of Gen AI and leverage its potential to drive innovation and growth in their businesses. An ethical approach can also enhance brand reputation and innovation.”
John McCloskey, Managing Director, ANZ at Lenovo ISG
“Integrating AI into business processes offers immense opportunities but also raises ethical concerns that demand careful attention. SMEs must be transparent about AI, listen to concerns, and involve stakeholders in decisions to build trust and handle ethical issues better. Some other considerations are:
Optimising AI for enhanced workflow efficiency, not workforce replacement – AI alleviates time spent on monotonous tasks, freeing people to focus on innovative thinking and higher-value business tasks. However, over-reliance on AI, rather than investment in human capital and skills development, can hinder sustainable growth.
How AI bias can impact decision-making – Using AI comes with challenges, including ensuring data quality, addressing algorithmic biases, and maintaining ethical standards. AI systems are not immune to bias, since they draw on historical data that is ‘fed’ into them by humans in the form of algorithms. Algorithms are, in effect, opinions written in code, and can lead to unfair or discriminatory practices.
Transparency, accountability and responsibility – AI decisions are not infallible and require human intervention or oversight. Maintaining transparent data collection practices, obtaining proper consent, and deploying robust security measures are imperative; indeed, this is a shared responsibility between public and private organisations. Transparency in AI is essential for building trust and accountability.
“Collaboration and knowledge-sharing within the industry on best ethical practices in AI can significantly benefit SMEs, fostering an ecosystem of responsible AI adoption while addressing ethical challenges.”
Julian Fayad, Founder and CEO of LoanOptions.ai
“As the landscape of artificial intelligence continues to evolve exponentially, we as business owners have an important ethical responsibility, especially when it comes to handling customer data. Whether we’re developing our own AI systems or leveraging third-party services, it’s imperative to prioritise using de-identified data over specific customer details. This approach maintains privacy and safeguards against misuse of personal information.
“Building our own AI models brings additional challenges, particularly the risk of ingraining biases within these systems. Our customer data sets are reflective of our existing clientele and, without careful oversight, could potentially marginalise other groups. It’s a responsibility we take seriously, ensuring our technology serves all customers to the best of our ability.
“In the sphere of privacy and transparency, we’re diligent about openly communicating our practices. We believe it’s crucial to be transparent about the collection and use of data in training our AI models. It’s not just about regulatory compliance; it’s about earning the trust of our customers and the community at large. By addressing these issues head-on, we can harness the potential of AI ethically and effectively, fostering innovation that respects individual rights and societal values.”
Leola Small, Founder & Managing Director at Small Mktg
“The potential for AI to transform business is undeniable, yet its deployment demands a conscientious understanding of ethical implications. When SMEs introduce AI in their operations, several ethical challenges and considerations arise.
“Transparency and accountability stand at the forefront of ethical AI use. SMEs must ensure customers are fully informed about how their data is gathered, analysed, and employed. This clarity in data usage policies is not just a legal imperative but a cornerstone of trust and brand integrity.
“Equally important is the need to eliminate biases in AI algorithms. Often, the datasets training these models inadvertently reflect societal biases, leading to discriminatory outcomes. SMEs have a responsibility to scrutinise their data sources and employ effective bias mitigation strategies.
“An often-overlooked ethical consideration for SMEs is the broader impact of AI on market dynamics. AI can significantly alter market competition, potentially leading to monopolistic scenarios or the marginalisation of smaller players who lack AI capabilities. SMEs should strive for innovation that promotes healthy competition and market diversity rather than contributing to market imbalances.”
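On the bias-mitigation point Leola Small raises, one common starting place is simply measuring outcomes by group. The Python sketch below illustrates the well-known ‘four-fifths rule’ heuristic from employment-selection guidance; the data and function names are hypothetical, and a real review would pair such metrics with qualitative scrutiny.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, e.g. logged
    decisions from an AI screening tool. Returns rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate (the 'four-fifths rule' heuristic)."""
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)   # ≈ {'A': 0.667, 'B': 0.333}
print(four_fifths_check(rates))      # {'A': True, 'B': False}
```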
Paul Warren-Tape, SVP Global Risk & Compliance at IDVerse
“AI-powered solutions can noticeably boost efficiency for businesses without adding significant operational costs. For SMEs, this is particularly appealing as they go through a fast-paced growth phase. However, it is worth noting that even though AI-powered solutions are no longer novelties, there are still limitations as their commercial application matures.
“AI-powered identity verification solutions, for example, improve onboarding time and the overall customer experience when integrated properly into business workflows. However, not all AI is created equal, and there can be bias within an AI algorithm. Such bias hinders a natural customer experience that is accessible to all: customers with different skin tones, devices, levels of internet access, and ID documents in different languages should be recognised just like everyone else. At IDVerse, we built our Zero Bias™ AI from scratch to ensure that our technology is built for a world inclusive of all real customers, and no one else.
“The seeming simplicity of AI technology can sometimes overshadow compliance gaps. As SMEs scale up, they need toolsets that meet industry best practice and are able to keep up with the compliance needs of their growing business whilst preserving the privacy of their customers. As an identity verification provider, we are proud of being the first private company in Australia that was accredited under the government’s Trusted Digital Identity Framework at the highest level. This means we can provide reliable and trustworthy solutions to our customers of all company sizes and across different sectors.”
Tariq Shaban, Senior Assessment Consultant, APAC at HireVue
“The ethical dilemmas surrounding AI largely stem from potential misuse or unintended consequences, which can negatively impact individuals and society while undermining trust in these systems. AI, if not carefully monitored, may unintentionally reinforce biases found in its training data, leading to biased and unfair results in critical areas like hiring, lending, and law enforcement. These systems can also be complex and opaque, which can be a significant issue in industries where understanding the decision-making process is crucial.
“However, the advantages of AI in identifying and addressing bias far outweigh any potential drawbacks that people might perceive. AI should be seen not as a source of bias, but as a tool to combat it, so long as it is accompanied by ongoing measurement and oversight.
“SMEs looking to ethically incorporate AI should seek to understand the problems they are attempting to solve, to be sure that AI solutions are fit for purpose and can solve their challenges rather than add to them. In hiring, for example, AI solutions should be proactive in identifying and addressing potential biases, acknowledging their substantial impact on individuals and communities. Clarity on AI algorithm development and functionality is crucial, with an emphasis on the need to train algorithms on diverse, representative data to reduce bias.
“Finally, SMEs should strive for explainability in AI systems, ensuring that the logic behind decisions is understandable to end-users. This transparency not only builds trust but also facilitates regulatory compliance and enables users to identify potential errors in decision-making.”
Hemant Kashyap, Chief Product Officer at Clio
“Recent advances in both predictive and generative AI technologies are opening up limitless innovative applications. However, these technologies can expose SMEs to a minefield of ethical considerations and challenges that can have business and reputational impact.
“First up, today’s SMEs often lack vast datasets, making their AI solutions more susceptible to skewed perspectives, particularly in critical areas like hiring or customer service. This isn’t just a technical problem; it’s a brand risk.
“Similarly, most AI models are still in their first generation and are still being trained. This can lead to hallucinations – a phenomenon where a model perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate. Decisions made on the basis of these outputs could have disastrous consequences.
“Finally, data privacy and security take on a new urgency in the AI era. SMEs are not just guardians of data; they are now its interpreters, and they must comply with regulations. There’s a fine line between personalised services and invasive surveillance, and companies must navigate this with care.
“Ultimately, AI is not just about efficiency and profit, it’s about shaping the future of work and society.”
Liam Dermody, Director of Threat Analysis, ANZ at Darktrace
“AI has the potential to change the way we work and how businesses operate. In fact, this rapidly growing market was worth $7.9 billion in Australia in 2023, and much of that growth is being driven by SMEs looking to improve decision making, boost customer service and increase efficiency.
“No technology, however, is perfect, and that goes for AI. It’s still early days for some artificial intelligence solutions, with ChatGPT having launched just over a year ago. One of the ethical considerations SMEs must be attuned to is what the implications are when AI gets it wrong. In particular, generative AI – the sort used by ChatGPT – can give wrong responses, something known in the industry as ‘hallucinations.’ If the output is not properly verified before it is put into use, there could be substantial ethical consequences. For example, earlier this year a Victorian mayor became aware that ChatGPT had falsely claimed he had spent time in jail when members of the public began notifying him. The reputational damage has seen the mayor take civil action against ChatGPT’s maker, OpenAI.
“SMEs also must be aware that cybercriminals can use generative AI for cyberattacks, particularly to craft phishing emails that sound legitimate and look believable. This puts pressure on SMEs, who must educate their employees about cybersecurity and how to spot the activities of a hacker before they gain access to their networks using AI-driven attacks.”
Andrii Bezruchko, CEO and founder at Newxel
“In the swiftly evolving landscape of AI, which outpaces regulation, SMEs should take proactive action on possible ethical challenges:
Data privacy and security: AI-based language models are trained on data sets that sometimes include personally identifiable information about individuals, and this data can sometimes be elicited with a simple text prompt. Ensure the anonymisation of sensitive information and stay proactively informed about evolving regulations.
Safeguarding intellectual property: AI tools are trained on massive image and text databases drawn from multiple, sometimes unknown, sources. Protect your own IP and guard against inadvertently breaching third-party copyright when using pre-trained foundation models.
The opacity of AI algorithms: AI systems group facts together probabilistically and consume tremendous volumes of data that may be inadequately governed and of questionable origin. Provide clear explanations for decision-making and establish accountability for the outcomes.
Discrimination in communication: AI-generated content sent on behalf of the company could inadvertently contain offensive language or issue harmful guidance to employees. Keep humans in the loop to ensure content meets the company’s ethical expectations and supports its brand values.”
Peter Philipp, ANZ General Manager at Neo4j
“While artificial intelligence has been a game-changer for many industries, it is not without controversy. Generative AI in the form of tools like ChatGPT has several limitations, such as hallucinations and inconsistencies, and its accuracy and quality have been shown to be untrustworthy, factually incorrect, biased or limited in scope for business use.
“There are countless examples where ChatGPT can appear to answer a question confidently and fluently even though the answer is totally untrue. SMEs should also consider ChatGPT-related legal ramifications for using inaccurate information or infringing on intellectual property.
“To overcome these issues, look to the technology underpinning ChatGPT-like tools, large language models (LLMs): businesses can combine the best of generative AI with other solutions, such as graph technology, to ensure improved safety, rigour and transparency.
“Graph technology is well-suited for applications where relationships between data elements are as important as the data itself. As a result, graphs are increasingly being deployed alongside LLMs to make organisations more confident in AI predictions.
“The potential of LLMs is powerful and profound, but their greatest opportunities will only be unleashed through responsible use. It’s important to counter bias both in how the models are defined and in how the AI is trained and fed. Knowledge graphs can frame data better, solve challenges and unlock outcomes that will provide invaluable benefits.”
Emma Bromet, Partner – Data & AI at Mantel Group
“AI and Machine Learning undoubtedly unlock a whole new level of innovation for Australian organisations. However, from privacy to bias in AI modelling, there are a lot of ethical considerations to be mindful of before embarking on an AI project. When automated decision-making impacts people’s lives, either directly or indirectly, the potential risks and harms need to be reviewed and fully qualified before any proposed solution is agreed upon.
“Any organisation implementing algorithmic decision-making needs to have an objective ethical review process encompassing both quantitative and qualitative considerations. Monitoring of model performance against ethical metrics in production is crucial to understand change in performance over time, and should be embedded in the ML ops process. The operationalisation of ethics within data & analytics must be a collaborative process between the data/tech teams and the business teams.
“We would welcome a reconsideration of the legal frameworks, institutional arrangements and rules for ethics in AI in Australia but irrespective of legislation, organisations should be aiming for best practice and adopting a question – review – measure – improve approach to managing the performance and impact of their automated decision-making.”
Chanelle Le Roux, Owner/Digital Strategist at Ninki Content Marketing
“As a Marketing agency owner who has had to pivot because of the emergence of AI and its significant impact on copywriting in particular, I am all too familiar with the ethical challenges of AI replacing human work.
“A consideration that is often overlooked is economic disparity, the creation of a gap between those accessing and harnessing AI’s power, and those who don’t. More importantly, how will it affect our minds? And the minds of future generations? Will it make us lazy thinkers? Will we all end up with brains of cheese while the robots run the show?
“As a Marketing strategist, I acknowledge the wonders of AI but stress its limitations in understanding the complexity of brand and business, especially in tasks requiring empathy and creative problem-solving.
“If businesses are using AI in any way, I believe they should document and publish an AI policy that explains how they do and don’t use AI in the business. Consumers have a right to know and I believe it is unethical to not be transparent about the use of AI in an organisation.”
David Price, Group CEO ANZ at Employsure
“AI can be a great business tool when used alongside people rather than as a replacement for them, allowing for enhanced efficiency, reduced admin, and even creative prompts.
“However, as its adoption increases, it presents ongoing challenges to SMEs, particularly as there’s no standard practice across the board for businesses to follow.
“It’s not surprising that small businesses are concerned or wary about using AI, given that it’s still fairly unknown. We recently conducted research on SMEs’ concerns about adopting AI, and at the top of the list were privacy risk and an increased margin of error.
“Regarding privacy risks such as data collection and surveillance, third-party companies may collect and sell data without the business’s prior knowledge or consent. AI is just as susceptible to data breaches and security exposure as any other platform, so this is a consideration for business owners when deciding whether to adopt it.
“As with any form of generative technology, there is also the probability of an increased margin of error, which can present itself through the misinterpretation of data, leading to inaccuracies that impact crucial business decisions. This is where human intervention is vital to the quality and accuracy of AI.
“Whether or not a business chooses to adopt AI in the near future, it must consider the impact it will have on the overall business framework as well as the bearing it will have on the existing workforce, and ensure clear policies are established for its use.”
Konstantin Klyagin, Founder of Redwerk
“The initial euphoria over AI is giving way to a growing privacy problem with the data that flows to AI companies. These companies openly say they use user data to improve their systems, but there are no guarantees that it will be protected.
“Fearing that employees may leak trade secrets or consumer data, Apple, Amazon, Verizon, and other companies have banned the use of third-party AI apps at work. At the same time, all of these companies can afford to develop a generative AI tool of their own. In fact, Amazon already has its CodeWhisperer, and Apple has something in the works, too.
“Small and midsize businesses don’t have such a luxury, so they’ll need to clearly define the list of things they can and cannot do with the help of AI. They’ll also need to continuously educate and upskill their employees.
“As a software development agency, we’re responsible for preserving our clients’ data privacy and integrity. We have a corporate policy mandating the responsible use of AI on client projects. For our internal needs, such as streamlining our content writing and editing processes, we don’t have any restrictions, and we experiment with AI to gain the necessary skills and stay competitive.”
Tim Bradley, Head of Consulting and Advisory at Logicalis Australia
“The hype surrounding generative AI is growing daily, as are the opportunities for dynamic businesses to leverage Gen AI. However, ethical concerns arise, particularly regarding the quality and diversity of the training datasets used to teach the AI. Without adequate diversity in the training datasets, biases may be unintentionally introduced. Deficiencies in some AI services, such as facial recognition, have been documented and attributed to the absence of diverse racial profiles in the source training sets.
“Privacy becomes a significant issue when AI gains access to personal and confidential data, with the risk of reconstructing and revealing private information. Robust safeguards are essential to prevent inadvertent exposure or unauthorised use of sensitive data during AI processes.
“These challenges are not insurmountable, and organisations can address them by proactively implementing and overseeing AI within their business through the creation of an AI ethics committee. This committee plays a pivotal role in defining and regulating ethical dimensions, ensuring responsible and transparent AI practices. Establishing such a committee is a cornerstone of ethical and accountable AI usage, and assists in effectively navigating the ethical landscape of AI.”
Robert K. Rowe, VP, Corporate AI, HID
“While the improvements in, and availability of, powerful machine learning algorithms allow security systems to better discriminate between authorised and unauthorised users, as well as provide ways to increase productivity, there are certainly ethical concerns surrounding the broad adoption of the technology.
“It is increasingly challenging to meet the evolving and divergent regulatory requirements of systems that incorporate advanced AI. Because AI also allows for greater connectivity between systems and scalability across data volumes, privacy concerns grow. Currently, most sophisticated AI systems are “black boxes” with very limited explainability or predictability of their responses across all scenarios. This can lead to unpredictable and problematic behaviour in certain cases.
“Businesses should focus on the positives of AI, while also being transparent about the potential negatives. Addressing the concerns proactively can help build trust and ensure that customers feel confident in their AI-enabled investments.”
Warren Schilpzand, Area Vice President of Australia and New Zealand at DataStax
“Many companies are embarking on an AI journey, with the idea of better serving customers, improving efficiency and relieving employees of mundane tasks in favour of creative, strategic work. But there are important factors organisations must address if their AI is going to be ethical.
“It’s common knowledge that generative AI (GenAI) can make answers up or, as it’s known in the industry, ‘hallucinate.’ And AI responses are only as good as the data set they’re trained on, meaning that if the data is biased, inaccurate or discriminatory, the responses the AI generates will also carry those flaws.
“This is why at DataStax we encourage our customers to build ethical AI with a strong governance structure, and advise that the entire business be trained on the limitations of AI – that is, what is this model good at, and what should the boundaries be on how and where it’s applied? Understanding these limitations will help prevent misuse down the line.
“We also use Retrieval Augmented Generation (RAG), which is a technique that enhances the capabilities of large language models (LLMs) by incorporating external knowledge sources and proprietary data into their generative process. Our customers use our Astra DB vector database and our RAGStack solution to deliver these external knowledge sources.”
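For readers unfamiliar with the pattern, the toy Python sketch below shows the core RAG loop in miniature: retrieve the most relevant trusted document, then fold it into the prompt so the model answers from governed data. It is our own simplified illustration, not DataStax code; real pipelines use learned embeddings and a vector database such as the Astra DB product mentioned above, whereas this sketch substitutes a bag-of-words stand-in.

```python
import math
from collections import Counter

# Toy stand-in for a real embedding model; production RAG uses
# learned embeddings stored in a vector database.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A small corpus of trusted, governed company documents.
documents = [
    "Refunds are processed within 14 days of a return request.",
    "Our support desk is open 9am to 5pm AEST on weekdays.",
]

def build_prompt(question: str) -> str:
    # Retrieve: rank trusted documents by similarity to the question.
    best = max(documents, key=lambda d: cosine(embed(question), embed(d)))
    # Augment: ground the model's answer in the retrieved source.
    return (f"Answer using ONLY this context:\n{best}\n\n"
            f"Question: {question}")

print(build_prompt("How long do refunds take?"))
```

The design point is the one Schilpzand makes: because the model is steered toward vetted sources rather than its open-ended training data, the scope for hallucinated or biased answers narrows.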
Tracy Ford, Founder & HR Consultant at Concept HR Services
“Although there has been an explosion of AI over the past year, many individuals are still in the process of comprehending this new technology and its practical applications and, as such, ethical considerations may be taking a back seat.
“There are many ethical challenges and considerations but here are three that are top of mind for me.
“Despite the efficiency AI tools bring, organisations must prioritise safeguarding information to ensure data security.
“Human expertise is essential to validate AI output. I think it may be tempting, for example, to use AI instead of engaging HR expertise but it is still crucial to seek expert advice, especially when generating critical documents like contracts, policies, and disciplinary letters.
“Another dilemma arises when SMEs delegate too much responsibility to AI in communication. While AI can generate content quickly, it may lack the unique voice and tone that characterise a company’s brand. Overreliance on AI-generated content may dilute the organisation’s identity.
“To address these challenges, SMEs should formulate comprehensive AI policies covering data privacy, responsible AI tool usage, and guidelines for interacting with AI-generated content.
“Educating employees on the ethical considerations surrounding AI promotes a culture of awareness and responsible usage within the organisation.”
Thomas Fu, Executive Director at Motor Culture Australia
“As a business we use AI across many aspects of our operations to optimise output and improve efficiency. In fact, we’ve found the use of AI highly effective in training sales staff and assisting with customer outreach. Despite this, we’re aware of the risks of over-reliance on AI.
“For starters, there is significant ambiguity surrounding the ethical use of AI, particularly in creative fields such as content and image production. We have our own in-house creative team, including two photographers, which means we mostly use AI to improve business processes and save time on rudimentary tasks. For many small businesses with limited budgets and staff, however, it can be tempting to rely heavily on AI to fill business gaps.
“There are many issues with this. AI cannot replace human expertise, and over-reliance could lead to a critical skills and knowledge shortage. Likewise, significant data privacy and security issues exist with the use of AI. Businesses should refrain from inputting sensitive or confidential data into generative programs that share this information with third-party sources. Unfortunately, AI algorithms also contain significant biases, given they are trained on pre-existing data. Relying too heavily on AI could result in unfair outcomes for groups of customers or employees who have been overlooked due to data biases.
“Ultimately, taking a common-sense approach to the use of AI, and researching any programs you use ahead of time, will help your business avoid these issues.”
Sadiq Iqbal, Security Engineering Manager For The Office of the CTO, Check Point Software Technologies
“AI platforms are valuable tools with wide-ranging applications and as they become more advanced, they will increasingly become a core part of the security landscape.
“However, AI has both offensive and defensive applications and these have unfortunately captured the attention of cybercriminals. They have been quick to understand how it can be put to use to streamline their activities and make attacks more likely to succeed.
“As a result, the IT sector needs to face up to some key challenges. The first is the task of ensuring data privacy and protection. Measures need to be put in place to ensure that the data on which AI platforms such as ChatGPT are trained is not private or confidential. Steps also need to be taken to ensure that the models powering AI tools are transparent and free from bias. Any failure to do this could lead to outputs that favour one group of people over another.
“Regulators will also have to work to ensure that the vendors creating AI-powered platforms are held accountable and responsible for the way they operate and the outputs they create. This liability also has to be enforceable regardless of where in the world the vendor is based.”
Shaun Leisegang, General Manager – Automation, Data and AI at Tecala
“Generative AI is revolutionising the business landscape, offering profound opportunities for innovation and efficiency. Small and medium-sized enterprises (SMEs), in particular, are leveraging these advanced technologies to supercharge their business operations, personalise customer experiences, and automate tedious, mundane, and repetitive tasks. However, this technological leap brings with it a spectrum of ethical challenges and considerations that must be carefully managed.
“When SMEs integrate AI into their operations, they must adhere to ethical values such as equality, safety, privacy, openness, comprehension, and accountability. Equality demands that AI systems treat all users fairly and make unbiased decisions. In terms of safety, SMEs are responsible for ensuring the reliability and security of AI applications. Protecting user privacy is paramount, as AI systems often handle sensitive data. Openness entails making AI understandable and accessible to diverse user groups. Comprehension requires SMEs to explain AI decisions and processes to stakeholders. Finally, accountability involves assigning clear responsibility for AI-driven outcomes.
“Navigating these principles is essential for SMEs to responsibly exploit the capabilities of AI and maintain trust with their employees and customers alike.”
David Fischl, Legal Digital Transformation Lead Partner at Hicksons Lawyers
“The emergence of AI has given SMEs a golden ticket by democratising access to information and resources. By effectively utilising AI, SMEs can achieve more with less. The challenge for SMEs is to embrace AI whilst also being aware of its risks and ethical issues.
“As a starting point, SMEs should consider the Artificial Intelligence (AI) Ethics Framework and AI Ethics Principles set out by the Australian Government. The framework is a voluntary guide that will assist SMEs to responsibly design, develop and implement AI. It sets out eight principles that SMEs should be aware of when using AI:
Human, societal and environmental wellbeing
Human-centred values
Fairness
Privacy protection and security
Reliability and safety
Transparency and explainability
Contestability
Accountability
“SMEs that apply and commit to the Principles will place themselves in the best position to ensure that they are responsibly and ethically using AI.”
Chris Ellis, Director of Pre-Sales at Nintex
“In the ever-expanding realm of artificial intelligence (AI) across diverse industries, experts with specialised knowledge play a pivotal role in guaranteeing its conscientious and ethical integration:
Transparency: SMEs need to ensure that AI models maintain transparency, particularly with the hope of ensuring mainstream adoption within an organisation. Without it, evaluating the fairness, bias, and potential risks of AI systems becomes challenging.
Bias: AI models are trained on data, and the quality of that data significantly influences the result. SMEs must assess the data used in training AI models, ensuring it is devoid of bias, and is fair and neutral. Biased data can result in discriminatory outcomes.
Privacy and Security: AI systems often deal with sensitive personal data, giving rise to concerns about privacy and security. SMEs must implement robust measures to protect this information, preventing unauthorised access or misuse.
Human-in-the-loop: Maintaining human control over AI systems is crucial. SMEs must establish clear governance frameworks that identify the roles and responsibilities of humans and AI in any decision-making process, as well as ensure an ‘out’ for end users to engage with a human interpreter.”
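To make the human-in-the-loop idea concrete, here is a minimal Python sketch of a confidence-gated decision router; the threshold, class and field names are our assumptions for illustration, not a reference to any particular product.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed policy value, set by governance

@dataclass
class Decision:
    outcome: str
    confidence: float

def route(decision: Decision) -> str:
    """Auto-apply only high-confidence AI decisions; everything else
    is escalated to a named human reviewer, giving end users a
    guaranteed 'out' to a person."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-applied: {decision.outcome}"
    return f"escalated to human reviewer: {decision.outcome}"

print(route(Decision("approve application", 0.95)))  # auto-applied
print(route(Decision("decline application", 0.60)))  # escalated
```

However an SME implements it, the point stands: the ‘out’ to a human should be designed in from the start, not bolted on after deployment.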