#Diversity in high-tech fields remains critically low. The Equal Employment Opportunity Commission (EEOC) recently reported that #Black and #Latino professionals are underrepresented in high-tech roles, especially in leadership. These findings highlight ongoing structural barriers in hiring, promotion and retention. This gap is a missed opportunity to tap into a wealth of diverse talent and perspectives essential to the future of tech. However, addressing these challenges will require time, consistent effort and a long-term commitment to systemic change. Companies can advance representation in tech by investing in training, mentorship and internship opportunities that open doors for people who have historically been shut out. Programs like internXL, a platform committed to increasing diversity and inclusion in the internship hiring process for top companies, are making a significant impact. Similarly, the expansion of STEM education at institutions like Cornell University is helping to connect talented young people from underrepresented communities with opportunities for high-tech careers. When we work together to remove these barriers, we’re fostering a more inclusive workforce and strengthening innovation, problem-solving and leadership in the industry. Let’s build a tech future that reflects the diversity of our society. https://bit.ly/3UNtOCh
Tech-Driven Workforce Diversity
Explore top LinkedIn content from expert professionals.
-
BREAKING! The FDA just released a draft guidance, titled Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, which aims to provide industry and FDA staff with a Total Product Life Cycle (TPLC) approach for developing, validating, and maintaining AI-enabled medical devices. Even at the draft stage, the guidance matters: it gives more detailed, AI-specific instructions on what regulators expect in marketing submissions and on how developers can control AI bias. What’s new in it?
1) It requests clear explanations of how and why AI is used within the device.
2) It requires sponsors to provide adequate instructions, warnings, and limitations so that users understand the model’s outputs and scope (e.g., whether further tests or clinical judgment are needed).
3) It encourages sponsors to follow standard risk-management procedures and stresses that misunderstanding or incorrect interpretation of the AI’s output is a major risk factor.
4) It recommends analyzing performance across subgroups to detect potential AI bias (e.g., different performance in underrepresented demographics).
5) It recommends robust testing (e.g., sensitivity, specificity, AUC, PPV/NPV) on datasets that match the intended clinical conditions (a rough sketch combining points 4 and 5 follows this post).
6) It recognizes that AI performance may drift (e.g., as clinical practice changes), so sponsors are advised to maintain ongoing monitoring, identify performance deterioration, and enact timely mitigations.
7) It discusses AI-specific security threats (e.g., data poisoning, model inversion/stealing, adversarial inputs) and encourages sponsors to adopt threat modeling and testing (fuzz testing, penetration testing).
8) It proposes public-facing FDA summaries (e.g., 510(k) Summaries, De Novo decision summaries) to foster user trust and better understanding of the model’s capabilities and limits.
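A minimal sketch of the subgroup performance analysis referenced in points 4 and 5, not code from the FDA guidance: it assumes a hypothetical validation table with columns label (0/1 ground truth), score (model output) and subgroup (demographic group), and a 0.5 decision threshold.

```python
# Illustrative only: per-subgroup sensitivity, specificity, PPV, NPV and AUC,
# the kinds of metrics the draft mentions, stratified to surface potential AI bias.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_metrics(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Compute classification metrics separately for each demographic subgroup."""
    rows = []
    for name, g in df.groupby("subgroup"):
        y_true = g["label"].to_numpy()
        y_pred = (g["score"].to_numpy() >= threshold).astype(int)
        tp = int(((y_pred == 1) & (y_true == 1)).sum())
        tn = int(((y_pred == 0) & (y_true == 0)).sum())
        fp = int(((y_pred == 1) & (y_true == 0)).sum())
        fn = int(((y_pred == 0) & (y_true == 1)).sum())
        rows.append({
            "subgroup": name,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else np.nan,
            "specificity": tn / (tn + fp) if (tn + fp) else np.nan,
            "ppv": tp / (tp + fp) if (tp + fp) else np.nan,
            "npv": tn / (tn + fn) if (tn + fn) else np.nan,
            # AUC is only defined when both classes appear in the subgroup
            "auc": roc_auc_score(y_true, g["score"]) if len(set(y_true)) == 2 else np.nan,
        })
    return pd.DataFrame(rows)

# Usage sketch: flag subgroups whose sensitivity falls well below the overall value
# metrics = subgroup_metrics(validation_df)
# print(metrics.sort_values("sensitivity"))
```

In practice a sponsor would report such per-subgroup numbers alongside overall figures and confidence intervals, which lines up with the subgroup reporting the post describes.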
-
European Accessibility Act (EAA): Why WCAG AA Isn’t Enough (https://lnkd.in/eHXE3YFK), a guide on why meeting WCAG standards alone doesn’t mean that digital products are compliant with the EAA, and what the EAA covers beyond the usual suspects. Put together by the fine folks at Stark. WCAG guidelines focus on web content accessibility — color contrast, headings, navigation order, focus states, etc. (a tiny contrast-ratio check is sketched after this post). That’s necessary, but not sufficient. The EAA adds requirements that go beyond the UI layer:
1. Interoperability with assistive technology
2. Third-party vendors, tools, services
3. Accessible support and cancellation flows
4. Conformance statements and technical docs
5. End-to-end usability (e.g. across devices, platforms)
6. Full-service accessibility (before, during, after)
7. Information delivery at every stage of use (e.g. emails).
Frankly, it’s very difficult to imagine that an end-to-end accessible experience covering the points above would emerge from a few accessibility-focused sprints run a few times a year. Yet in many organizations, accessibility initiatives are one-off efforts. When a deadline approaches, there is a big push to make digital products and services compliant, document the effort and leave it be — until the next round of compliance work. Accessibility is treated as necessary work that must be done every now and again, rather than an ongoing investment and an opportunity to reach a wider audience. I love the point that organizations need to operationalize accessibility the way they govern privacy and security. That requires people who enable and establish accessibility efforts, track their success and inform product development. It’s easier to achieve when it’s an ongoing effort, and when it involves a diverse group of users in research, design and testing. Accessibility never happens by accident. There must be a deliberate effort to make products and services more accessible. It doesn’t have to be challenging if it’s considered early. No digital product is neutral. Accessibility is a deliberate decision and a commitment. Not only does it help everyone; it also shows what a company believes in and values. And once you have that commitment, it becomes much easier to retain accessibility than to bolt it on at the last minute — by then it’s way too late to do it right, and way too expensive to do it well. And yet again, a kind word of support to everyone speaking up for and supporting accessibility work, often against a lot of resistance, with very little budget and with a lot of care and persistence — to help people who often need help the most, and add benefits for everybody else. 👏🏼👏🏽👏🏾 Useful resources: The New European Accessibility Act (EAA), And What It Means For You https://lnkd.in/eH-5Q3Mr #ux #WebAccessibility
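To make the "color contrast" part of the UI layer concrete, here is a small sketch of the WCAG 2.x contrast-ratio formula (AA expects at least 4.5:1 for normal text and 3:1 for large text). The sample colors are only an illustration, and passing this check says nothing by itself about EAA compliance.

```python
# WCAG 2.x contrast ratio between two sRGB colors, given as 0-255 channel tuples.

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance per the WCAG 2.x definition."""
    def linearize(channel: int) -> float:
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between foreground and background, from 1.0 to 21.0."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Example: #777777 grey on white lands just below the 4.5:1 AA threshold (~4.48:1)
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))
```

Checks like this cover only the UI layer; as the post stresses, EAA compliance also depends on support flows, documentation and third-party services that no automated contrast test can see.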
-
As AI weaves itself into the fabric of our lives, we tend to assume that all of us want the same things from AI. A recent study from Stanford HAI reveals that our cultural background significantly influences our desires and expectations of AI technologies. European Americans, deeply rooted in an independent cultural model, tend to seek control over AI. They want systems that empower individual autonomy and decision-making. In contrast, Chinese participants, influenced by an interdependent cultural model, favour a connection with AI, valuing harmony and collective well-being over individual control. Interestingly, African Americans navigate both these cultural models, reflecting a nuanced balance between control and connection in their AI preferences. The importance of embracing cultural diversity in AI development cannot be overstated. As we build technologies that are increasingly global, understanding and integrating these diverse cultural perspectives is essential. The AI we create today will shape the world of tomorrow, and ensuring that it resonates with the values and needs of a global population is the key to its success. When designing technology solutions, we must think beyond our immediate cultural contexts and strive to create systems that are inclusive, adaptable, and culturally aware. If OpenAI wants to benefit humanity, then that needs to be humanity with all our different world views. The key takeaways from the study can apply to all kinds of product development:
1. Cultural Awareness: recognise that preferences vary across cultures, and these differences should inform design and implementation strategies.
2. Inclusive Design: incorporate diverse perspectives from the outset to create products that resonate globally.
3. Global Leadership: lead with an understanding that what works in one cultural context might not in another—adaptability is key.
By embedding these principles into our product development efforts, we can ensure that the technology and products we develop are culturally attuned to the needs of a diverse world. I would love to see deeper analysis of this cultural lens, as it should inform the way we work with technology for good. There is always a danger that as we seek to break one set of biases, we introduce our own. How do you think leaders should adapt their AI approaches or product development on the basis of this research? #AI #product #research #techforgood #responsibleAI Enjoy this? ♻️ Repost it to your network and follow me Holly Joint 🙌🏻 I write about navigating a tech-driven future: how it impacts strategy, leadership, culture and women 🙌🏻 All views are my own.
-
🌍 UNESCO’s Pillars Framework for Digital Transformation in Education offers a roadmap for leaders, educators, and tech partners to work together and bridge the digital divide. This framework is about more than just tech—it’s about supporting communities and keeping education a public good. 💡 When implementing EdTech, policymakers should pay special attention to these critical aspects to ensure that technology meaningfully enhances education without introducing unintended issues:
🚸 1. Equity and Access: Policymakers need to prioritize closing the digital divide by providing affordable internet, reliable devices, and offline options where connectivity is limited. Without equitable access, EdTech can worsen existing educational inequalities.
💻 2. Data Privacy and Security: Implementing strong data privacy laws and secure platforms is essential to build trust. Policymakers must ensure compliance with data protection standards and implement safeguards against data breaches, especially in systems that involve sensitive information.
🚌 3. Pedagogical Alignment and Quality of Content: Digital tools and content should be high-quality, curriculum-aligned, and support real learning needs. Policymakers should involve educators in selecting and shaping EdTech tools that align with proven pedagogical practices.
🌍 4. Sustainable Funding and Cost Management: To avoid financial strain, policymakers should develop sustainable, long-term funding models and evaluate the total cost of ownership, including infrastructure, updates, and training. Balancing costs with impact is key to sustaining EdTech programs.
🦺 5. Capacity Building and Professional Development: Training is essential for teachers to integrate EdTech into their teaching practices confidently. Policymakers need to provide robust, ongoing professional development and peer-support systems, so educators feel empowered rather than overwhelmed by new tools.
👓 6. Monitoring, Evaluation, and Continuous Improvement: Policymakers should establish monitoring and evaluation processes to track progress and understand what works. This includes using data to refine strategies, ensure goals are met, and avoid wasted resources on ineffective solutions.
🧑🚒 7. Cultural and Social Adaptation: Cultural sensitivity is crucial, especially in communities less familiar with digital learning. Policymakers should promote a growth mindset and address resistance through community engagement and awareness campaigns that highlight the educational value of EdTech.
🥸 8. Environmental Sustainability: Policymakers should integrate green practices, like using energy-efficient devices and recycling programs, to reduce EdTech’s carbon footprint. Sustainable practices can also help keep costs manageable over time.
🔥 Download: UNESCO. (2024). Six pillars for the digital transformation of education. UNESCO. https://lnkd.in/eYgr922n #DigitalTransformation #EducationInnovation #GlobalEducation
-
As GenAI becomes more ubiquitous, research alarmingly shows that women are using these tools at lower rates than men across nearly all regions, sectors, and occupations. A recent paper from researchers at Harvard Business School, Berkeley, and Stanford synthesizes data from 18 studies covering more than 140k individuals worldwide. Their findings:
• Women are approximately 22% less likely than men to use GenAI tools
• Even when controlling for occupation, age, field of study, and location, the gender gap remains
• Web traffic analysis shows women represent only 42% of ChatGPT users and 31% of Claude users (a rough conversion of these shares into a usage gap is sketched after this post)
Factors Contributing to the Gap:
- Lack of AI Literacy: Multiple studies showed women reporting significantly lower familiarity with and knowledge about generative AI tools; this was the largest driver of the gender gap.
- Lack of Training & Confidence: Women have lower confidence in their ability to use AI tools effectively and are more likely to report needing training before they can benefit from generative AI.
- Ethical Concerns & Fears of Judgement: Women are more likely to perceive AI usage as unethical or equivalent to cheating, particularly in educational or assignment contexts. They’re also more concerned about being judged unfairly for using these tools.
The Potential Impacts:
- Widening Pay & Opportunity Gap: Considerably lower AI adoption by women creates further risk of them falling behind their male counterparts, ultimately widening the gender gap in pay and job opportunities.
- Self-Reinforcing Bias: AI systems trained primarily on male-generated data may evolve to serve women's needs poorly, creating a feedback loop that widens existing gender disparities in technology development and adoption.
As educators and AI literacy advocates, we face an urgent responsibility to close this gap, and simply improving access is not enough. We need targeted AI literacy training programs, organizations committed to developing more ethical GenAI, and safe and supportive communities like our Women in AI + Education to help bridge this expanding digital divide. Link to the full study in the comments. And a link to learn more about or join our Women in AI + Education Community. AI for Education #Equity #GenAI #Ailiteracy #womeninAI
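A back-of-the-envelope sketch (not the paper's methodology) of how a user-share figure like "women are 42% of ChatGPT users" translates into a relative usage gap, under the simplifying assumption that women and men are roughly equal shares of the relevant population.

```python
# Illustrative arithmetic only: convert a reported female share of a tool's users
# into "how much less likely women are to be users than men", assuming equal
# population shares. This is not how the cited paper estimates its 22% figure.

def relative_usage_gap(female_user_share: float, female_pop_share: float = 0.5) -> float:
    """Return the fractional shortfall in women's usage rate relative to men's."""
    male_user_share = 1 - female_user_share
    male_pop_share = 1 - female_pop_share
    # Ratio of users-per-capita for women vs. men
    ratio = (female_user_share / female_pop_share) / (male_user_share / male_pop_share)
    return 1 - ratio

print(f"{relative_usage_gap(0.42):.0%} lower for ChatGPT")  # ~28% lower
print(f"{relative_usage_gap(0.31):.0%} lower for Claude")   # ~55% lower
```

The results differ from the paper's 22% estimate because the paper pools adoption rates across 18 studies rather than inferring them from web-traffic shares; the sketch only shows why a 42% user share already implies a sizable gap.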
-
"In 2024, we’ll continue to see attention-grabbing headlines out of Silicon Valley, but the real AI story is happening in communities at the frontlines of vulnerability, particularly Indigenous and underrepresented groups, who are building climate solutions, democratizing access to the tech economy, and improving healthcare accessibility. There are so many unsung heroes at the intersection of AI and global challenges, and we need to champion their efforts and successes – not for notoriety, but so that those solutions can be tailored and scaled for vast progress across the world." Building AI that solves real, global human problems could help us create a society that values purpose, people, and planet over profit. We need to invest in communities, tools, and technologies that center the lived experience of people and create a path to a more just future. In today’s Forbes column, Shannon Farley of Fast Forward shares insights from leaders driving philanthropic investment in AI, as well as some of the most effective nonprofits leveraging technology for positive social outcomes at scale. Delighted to be in common cause with brilliant doers - Brigitte Hoyer Gosselink, Anu Malipatil, Stephanie Lo, Reid Hoffman and Suzanne DiBianca! https://lnkd.in/eKm9G9Zn The Patrick J. McGovern Foundation Climate TRACE, WattTime.org, Gavin McCormick
-
Let's start Disability Pride Month 💜 with "Why the Disability/Neurodivergent Community Advocates for Work From Home or Flexible Work"
1. Addressing Inaccessibility: Traditional office environments often lack the necessary accommodations for individuals with disabilities. Remote work removes these barriers, providing an accessible and comfortable workspace tailored to individual needs.
2. Managing Energy Levels: Many people with disabilities experience limited energy levels due to chronic conditions. Flexible work allows them to manage their energy more effectively, reducing the risk of burnout and enhancing overall productivity.
3. Economic Benefits: Remote work eliminates the costs associated with commuting and the need for expensive adaptive equipment in the workplace. This financial relief can be significant, allowing individuals to invest in health, education, and personal growth.
4. Time for Self-Care and Family: Flexible work schedules provide individuals with disabilities more time for essential self-care routines and to spend quality time with their families. This balance is crucial for mental and physical well-being.
5. Environmental Sustainability: Reduced commuting contributes to lower carbon emissions, making remote work an environmentally sustainable option. This aligns with broader societal goals of reducing our carbon footprint.
6. Enhanced Productivity: Working from home allows for a personalized environment that can minimize distractions and increase focus, leading to higher productivity levels.
7. Improved Mental Health: The flexibility to create a comfortable and supportive work environment can significantly reduce stress and anxiety, contributing to better mental health.
8. Greater Inclusion and Equity: By adopting flexible work models, employers can ensure that their workplaces are inclusive and equitable, providing equal opportunities for individuals with disabilities.
What’s the point behind this? The insistence on traditional office setups often overlooks the unique needs of the disability community. Flexible work is not merely a convenience; it’s a necessity for creating an inclusive, equitable, and productive workforce. Why should location matter if employees can deliver high-quality work remotely? It’s time to rethink outdated workplace norms and embrace flexibility as a standard practice. In an ideal world, inclusivity and accessibility are at the core of corporate values.
ID: Screenshot of a Twitter post by Puneet Singhal (@puneetsiinghal22) with the tweet reading, "Why the Disability/Neurodivergent Community Advocates for Work From Home or Flexible Work." #DisabilityPrideMonth #WorkFromHome #WeAreBillionStrong #SDGs #AXSChat #Accessibility #DisabilityInclusion #WFH
-
Digital health promises transformation but it also raises deep ethical questions. A new perspective article argues that the principle of justice must guide how we design and deploy digital health. The authors remind us that equality, equity and justice are not the same. Equality gives everyone the same resources, equity adapts resources to individual needs, and justice goes further by addressing structural barriers that exclude people in the first place.
Key insights from the paper:
1. Digital determinants of health matter: Access to connectivity, digital literacy, algorithmic bias, and trust are as important as traditional social determinants of health.
2. Justice requires more than access: Providing devices or portals is not enough. Structural issues like inaccessible design, digital deserts, and biased algorithms can perpetuate exclusion unless actively corrected.
3. Vulnerable groups must be included: Older adults, people with disabilities, language minorities and those with low digital literacy are among the heaviest users of health systems yet the most at risk of exclusion. Co-creation and participatory design are essential.
4. Policy and practice must integrate ethics: Justice in digital health requires equity assessments, digital facilitators to support patients, literacy programs, and collaboration across sectors such as health, education and technology.
Digital health is not just a technical or clinical transformation, it is an ethical one. Justice must be the guiding value to ensure that digital innovation closes gaps rather than widening them. #DigitalHealth #HealthEquity #Bioethics #PatientEngagement #HealthInnovation #JusticeInHealth #HealthIT #DigitalInclusion #Techquity #HealthcareTransformation https://lnkd.in/d6TxRU2F
-
Technology can truly enable social progress — nowhere more so than in Asia-Pacific. My colleague Neeraj Aggarwal, who leads BCG in the region, recently shared a breathtaking list of tech-driven social advancements.
💵 Financial inclusion: India’s payment system, #UPI, now processes around $1 trillion in transactions. As Neeraj notes, even street-side vendors now say, ‘UPI, please’. Similar transformations are underway in Indonesia, the Philippines, and Singapore. In China, Alipay handles 100 times more transactions than PayPal.
🏞️ Climate and air pollution: Asia is home to many of the world’s most polluted cities and is responsible for around half the planet’s carbon footprint. In response, many companies across the region are investing at scale in solar and battery technologies. China, in particular, has driven down the cost of solar panels by 95% over the past two decades.
🏥 Healthcare: With 60% of the world’s population, Asia carries more than half the burden of diseases like diabetes and cancer. One of the key challenges is ensuring continuity of care — from small clinics to large hospitals. Many countries have struggled to implement this, but India has taken a big leap forward by digitising over 400 million health records.
Neeraj, it’s inspiring to see the progress Asia is making. Ideas are flowing from East to West, and I’m looking forward to what comes next. Watch Neeraj's TED Talk here: https://on.bcg.com/4l5HDXj