I focused my thesis study at SCAD on integrating ethical principles into the design of deep-learning AI systems, applying the perspective and practice of design management to build trust.
AI Ethics on the Back Burner:
While AI is becoming more ubiquitous and powerful at blazing speed, the hidden implications of its mass adoption are concerning. Currently, the creators of AI systems are not incentivized enough to factor in the risks and integrate ethical perspectives early on.
Collegial and Paced Effort:
Through extensive research and collaboration with industry experts, I developed and validated three conceptual models for implementing AI ethics across LLMs, AI products, users, tech companies, and research organizations. I then developed a framework that embeds ethics within deep-learning AI systems at an early stage and empowers organizations to create trust in the use of these systems.
After the disruptive release of ChatGPT by OpenAI, things have not been the same. Advanced algorithms are catering to many more interesting use cases.
However, the revolutionary convenience that these systems bring to our lives is accompanied by a dark cloud of incidents and controversies. Major potential risks include perpetuating structural discrimination, replicating existing biases, infringing on personal privacy, and making unpredictable decisions. Regulation also fails to keep pace with innovation.
As I was catching up on an array of impressive AI tools, I kept wondering:
Are we ready for what's coming next?
Do we even know what's coming next?
As the solo researcher / design manager on this study, I led the following activities:
1 - framing the problem
2 - carrying out research
3 - analyzing the findings
4 - market benchmarking
5 - mapping the ecosystem
6 - defining the design criteria
7 - building a concept catalog
8 - validating the ideas
9 - prototyping to market
10 - devising an implementation plan
Starting from the ground up, I took charge of:
1 - mapping user context and needs
2 - sharing domain knowledge within the team
3 - syncing with tech architects to design user flows
4 - translating research into dashboard concepts
5 - handing off wireframes to visual designers
6 - infusing business goals into UX iterations
7 - collaborating with business analysts to develop user stories
8 - storyboarding opportunities of differentiation
9 - driving user testing/feedback sessions
10 - pitching the UX impact to C-suite executives
I employed the principles of a holistic design-thinking process, with a stronger emphasis on research.
Eighty-five percent of consumers say that it is important for organizations to factor in ethics as they deploy AI to tackle society’s problems.
Motive: Identifying key players and their relative stake in the system
Medium: Preliminary study
Mapping: Stakeholder map
The primary targets for this project are US-based tech leaders, managers and researchers building generative AI systems.
The secondary targets are AI designers, developers, data scientists as well as tech influencers (such as AI experts).
Sketching a map of Entities, Relationships, Attributes and Flows helped visualize the relationships and interactions between different stakeholders, and revealed the dynamics of power and influence.
It turns out users are not explicitly aware of, or concerned about, the repercussions of using AI systems. In today's context, non-users seem to be more affected than users.
I conducted semi-structured interviews to examine how the practices of the participants align with AI ethics principles.
Each participant represented a cross-section of experience and responsibility. It was a fine mix of thinkers (strategy) and doers (creation).
There was a sharp contrast between the challenges faced by the users of AI systems and those faced by the tech companies developing them.
Organizations had a variety of internal as well as external reasons that led to them deprioritizing ethics in the bigger scheme of things.
Early inputs from the stakeholders confirmed my belief that designers & researchers can play a pivotal role in developing ethically sound AI systems.
From the data I gathered in my interactions, it appeared that different teams within a company hold assumptions about who is responsible for AI ethics. For instance, a designer from a big tech company presumed that ensuring AI ethics is primarily the job of the legal teams.
In order to move the needle, it is important to understand and map out the system boundaries. Within this radar, stakeholders such as researchers, policymakers, and industry experts can collaborate to develop ethical guidelines, robust evaluation frameworks, and regulatory policies that promote accountable use of generative AI, ensuring their safe and beneficial deployment in society.
To visualize how policymakers and stakeholders make decisions when forming regulations and launching initiatives, an innovation ecosystem map served as a valuable tool.
The map was then abstracted to only focus on the flows and relationships among the entities. This way, I could trace two underlying systems - one within the boundaries of tech orgs, and the other outside of them.
I expanded upon the ecosystem map by labelling various measures already being taken at the level of each stakeholder, so that my ideas would not turn out to be redundant.
This exercise exposed an important insight:
All the heavy lifting of AI ethics currently rests on the shoulders of tech developers alone. Maybe there is an opportunity to balance the load within the system.
While there are plenty of guidelines for AI ethics, I was on the lookout for those specifically targeted at designers and researchers.
The top-right quadrant was the ideal spot for this study to gain a perspective on similar market players and solutions.
For competitor analysis, I used blue ocean canvassing to benchmark ideal attributes and highlight potential opportunity areas.
The untapped growth areas revealed themselves, and I used them to chart the trajectory of my design innovation.
At the beginning of this project, my opportunity statement focused on strategies specifically for designers and researchers of deep-learning AI to embed ethics early in development. But after analyzing the data from primary and secondary research, I learned that even if designers and researchers are in a position to make ethically sound decisions, the change needs to happen in a broader, cohesive and continuous manner.
To propose a cohesive intervention which equips the companies leveraging generative AI to integrate and propagate ethics at all stages of development, thereby building transparency and sustainable trust among all AI users.
There are handy guides and toolkits for designers and researchers to practice AI ethics. But in times like this, the pressing need is for AI makers to have a tailored, reliable and collaborative framework for their specific use of generative AI technologies.
Taking a cue from all this synthesis and the desired attributes of the ethical AI system, I listed out the criteria against which I would brainstorm conceptual models.
Because it was hard to meet all of them in one shot, I categorized them into:
must-have, should-have and nice-to-have.
In the world of AI, it is crucial to segregate utility from experimentation or demo purposes. The usage of large language models by product companies can be likened to Lego building blocks or making your own salad. You can subscribe to and acquire the specific pieces you need to create something unique.
This approach would ensure a more controlled, targeted and conscientious use of AI resources, where companies collect nuanced data from users who genuinely require the technology.
Additionally, as a mechanism to prioritize ethical practices, companies could lose access to these subscriptions if they fail to comply with ethical standards for deploying generative AI.
This shift also allows for more focused training of models and ensures that access to AI is not wide open for random purposes. As a result, many interesting use cases can emerge, encouraging companies to collaborate rather than engage in competitive battles. Ultimately, this collaboration serves to strengthen the algorithms powering AI advancements.
The second approach embraces a combination of enhanced product transparency with user discretion and control.
To make interactions transparent, full disclosure about data collection is crucial (what, how, why, for how long). Just like the symbols used to distinguish vegetarian from non-vegetarian food, NFT watermarking can be utilized to identify:
(a) if AI was used to generate an output,
(b) to what extent, and
(c) if the AI model is in experimental stage.
Providing decision maps can help users understand the rationale behind the outcomes. Additionally, AI tools can offer three outputs for users to choose from, depending on varying degrees of accuracy, fairness, personalization, privacy, and traceability.
Another way to boost trust is to disclose the reciprocal value to users for their data being shared, and offer creative ways to share data (e.g., in exchange for money or other benefits). Conducting Reverse Wizard of Oz and Bug Bounty trials can be fun ways for users to participate and share feedback.
At the other end, users have stronger control in terms of consent, opting-out, purging the data collected, reporting or flagging any issues, recording if the output surprised them, and engaging in community forums to share experiences.
Companies leveraging the generative AI wave have various blockers in prioritizing and integrating ethical practices. These include arranging necessary resources, developing their own ethics models, aligning the organization on ethical standards, modifying data collection & storage practices, and ensuring that their IP (secret sauce) doesn't get leaked.
The current landscape of generative AI calls for a participatory model that leverages the strengths of multiple entities to accurately establish, assess and measure ethical considerations at all stages of development.
By recognizing the pivotal role of all stakeholders, we can achieve AI systems that are not only technologically advanced but also ethically sound, ultimately fostering trust, accountability, and positive societal impact.
The symbiotic framework sits at the periphery of the ecosystem, between the micro and macro levels. Making changes at the meso level offers broader impact, systemic change, improved efficiency and coordination, leverage of collective resources, scalability, and facilitation of individual-level change. It provides an opportunity to create lasting and meaningful improvements within organizations and communities.
A quick Impact vs Time matrix helped me identify which of the three concepts would be the most fruitful to implement in the short vs long term.
Presented below is a high-level symbiotic model that involves key stakeholders and calls out their roles, responsibilities and mutual interactions.
3 concepts | 2 participants | Virtual meet | Qualitative feedback
The participants analyzed each concept in depth and came up with benefits and challenges of each one of them. These inputs further guided me to improve them and propose an implementation plan.
Based on the time each concept would take to implement, the perceived benefits of the one already in execution, and the potential impact at scale, I identified the actionable next steps.
It was time to tie up the loose ends and clearly demarcate the value offered by the symbiotic framework for generative AI ethics.
Alongside system modeling, I did a little branding exercise to communicate how some of these ideas would work in action.
The symbiotic framework approach invites a strong partnership between companies and research institutes, not-for-profit organizations and auditing agencies.
The collaboration allows companies to focus on innovating by outsourcing the nitty-gritty of AI ethics integration to reliable experts and neutral parties.
This way, companies are incentivized to save money, effort and time, preserve their IP, receive customized solutions and gain user trust.
Here is a tentative plot of the three concepts on a timeline, shown in absolute years but also relative to one another.
One of our core responsibilities as designers is to understand the ethical impact of what we create. Designing for users' trust is a valuable skill, regardless of the problem being solved. I saw a great opportunity to figure out how designers and researchers can tackle ethics during (and not after) the developmental stages of generative AI. This experience has instilled in me a strong commitment to designing products that prioritize user needs and well-being.
More Projects
I love making connections, so feel free to reach out and say hello!
Need a systems thinker who can drive business outcomes?
Or have thoughts to share on my work?
Either way, I love making connections.
Feel free to connect via LinkedIn, email me, or schedule a time on my calendar.
Let's get talking!