<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by STS Software GmbH on Medium]]></title>
        <description><![CDATA[Stories by STS Software GmbH on Medium]]></description>
        <link>https://medium.com/@stssoftwaregmbh?source=rss-902ba6c43c66------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*NCiHnBBTyZolQlGBCLknJg.png</url>
            <title>Stories by STS Software GmbH on Medium</title>
            <link>https://medium.com/@stssoftwaregmbh?source=rss-902ba6c43c66------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 05 Apr 2026 18:14:28 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@stssoftwaregmbh/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[This AI Research from Google Explains How They Trained a DIDACT Machine Learning ML Model to…]]></title>
            <link>https://medium.com/@stssoftwaregmbh/this-ai-research-from-google-explains-how-they-trained-a-didact-machine-learning-ml-model-to-7180a196d200?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/7180a196d200</guid>
            <category><![CDATA[ai-research]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Fri, 26 Apr 2024 10:00:31 GMT</pubDate>
            <atom:updated>2024-04-27T05:26:59.923Z</atom:updated>
            <content:encoded><![CDATA[<h3>This AI Research from Google Explains How They Trained a DIDACT Machine Learning ML Model to Predict Code Build Fixes</h3><p>Software is developed through a series of iterative steps, including editing, unit testing, fixing build errors, and code reviews, until the product is good enough to be added to a repository. Google AI researchers <a href="https://research.google/blog/large-sequence-models-for-software-development-activities/"><strong>introduced DIDACT (Dynamic Integrated Developer ACTivity)</strong></a> to enhance developers’ experience of fixing build errors, focusing on Java development. Build errors are not only time-consuming but can also be complex, involving issues like generics or cryptic error messages. The frustration of resolving such errors led the researchers to propose a machine learning (ML) solution that automates identifying and fixing build errors.</p><p>Currently, developers spend significant time debugging build errors, ranging from simple typos to complex issues like generics or template errors. <a href="https://research.google/blog/safely-repairing-broken-builds-with-ml/"><strong>DIDACT ML resolves this issue by leveraging ML models trained on historical data of developers’ code changes and build logs</strong></a>. The key idea is the use of resolution sessions: chronological sequences capturing the evolution of code from the occurrence of a build error to its resolution. DIDACT ML can predict patches to fix build errors based on the code state at the time of the error and the subsequent fix. These fixes are then suggested to developers in real time within their Integrated Development Environment (IDE), allowing for immediate action.</p><p>The DIDACT ML model is trained on a comprehensive dataset of resolution sessions, encompassing various types of build errors and their corresponding fixes. 
At serving time, the model takes as input the current code state and the build errors encountered, then generates a patch with a confidence score as a suggested fix. Post-processing steps such as auto-formatting and heuristic filters are applied to ensure the quality and safety of the suggested fixes. The experiments suggest a statistically significant productivity improvement, including a reduced active coding time per change-list, shepherding time per change-list, and an increase in change-list throughput. The study also finds no observable increase in safety risks or bugs when ML-generated fixes are applied, demonstrating the effectiveness and safety of the proposed approach.</p><p>In conclusion, the paper presents a compelling solution to the problem of improving developers’ experience in fixing build errors through the use of ML-powered automated repair. By leveraging historical data and real-time suggestions within the IDE, developers can more efficiently address build failures, leading to increased productivity and developer satisfaction. Overall, the approach helps reduce developer toil and frees up time for more creative problem-solving tasks in software development.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/0*UXEKcxldv5z9ETj8.png" /></figure><p><em>Originally published at </em><a href="https://www.marktechpost.com/2024/04/26/this-ai-research-from-google-explains-how-they-trained-a-didact-machine-learning-ml-model-to-predict-code-build-fixes/?fbclid=IwZXh0bgNhZW0CMTAAAR2ks8UCecYZrJJ-p-WRB3fsG1retGEDu9mMXlnsDNa4m3qTxOsPDfxIYwc_aem_ATiLfWRIQQoxo1wirSxP14fycLWnOHovC8jvxCmSW2MZixZw8M-rrhGyFTV0rD1B96YFrbreW3uAVQXBAvDw5LIR"><em>https://www.marktechpost.com</em></a><em> on April 26, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7180a196d200" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What is AI dogfight? Know how AI and human pilot got engaged in aerial combat]]></title>
            <link>https://medium.com/@stssoftwaregmbh/what-is-ai-dogfight-know-how-ai-and-human-pilot-got-engaged-in-aerial-combat-54ae351f5327?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/54ae351f5327</guid>
            <category><![CDATA[chatbots]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Sun, 21 Apr 2024 08:24:42 GMT</pubDate>
            <atom:updated>2024-04-21T08:24:42.153Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/299/0*tZV_ray_zUxCRt0f.jpg" /></figure><p>The <a href="https://economictimes.indiatimes.com/topic/us-air-force">US Air Force</a> has conducted the world’s first known combat between a <a href="https://economictimes.indiatimes.com/topic/human-pilot">human pilot</a> and a fighter jet controlled by AI. The pair of <a href="https://economictimes.indiatimes.com/topic/f-16-fighter-jets">F-16 fighter jets</a> took off from Edwards Air Force Base in California, flew at speeds of up to 1,200mph and came within 600 meters of each other during <a href="https://economictimes.indiatimes.com/topic/aerial-combat">aerial combat</a>. In the first-ever dogfight of this kind, one fighter was manned, while the other jet was a modified version of the F-16, called the X-62A, or VISTA, the Variable In-flight Simulator Test Aircraft.</p><h3>Dogfight in California</h3><p>According to Fox Business, the dogfight between the AI and the human pilot was carried out as part of the Air Combat Evolution (ACE) program, launched by the Defense Advanced Research Projects Agency (DARPA) in 2019. The Air Force conducted the AI dogfights at Edwards Air Force Base in California, the home base of the 412th Test Wing.</p><h3>‘Machine Learning’</h3><p>DARPA said that the ‘machine learning’ agents had been tested in simulators on the ground for years. 
In an earlier experiment, conducted in 2020, the so-called “AI agents” defeated human pilots in simulations in all five of their match-ups.</p><p>Pilots were on board the X-62A fighter to take control in case of an emergency, but they did not participate in the dogfight, which took place in September last year.</p><h3>Autonomous air-to-air combat</h3><p>According to ‘Sky News’, Secretary of the Air Force Frank Kendall said, “The potential for autonomous air-to-air combat has been imaginable for decades, but the reality has remained a distant dream up until now.”</p><p>Colonel James Valpiani, a commandant at the US Air Force test pilot school, said, “Dogfighting is a perfect case for the application of machine learning.”</p><h3>FAQs:</h3><p><strong>Which jets were involved in the first AI dogfight?<br></strong>In the first-ever AI dogfight conducted by the US military, one fighter was manned, while the other jet was a modified version of the F-16, called the X-62A, or VISTA, the Variable In-flight Simulator Test Aircraft.</p><p><strong>How was the AI dogfight carried out?<br></strong>The pair of F-16 fighter jets took off from Edwards Air Force Base in California, flew at speeds of up to 1,200mph and came within 600 meters of each other during aerial combat.</p><p><em>Originally published at </em><a href="https://economictimes.indiatimes.com/news/international/us/what-is-ai-dogfight-know-how-ai-and-human-pilot-got-engaged-in-aerial-combat/articleshow/109463153.cms?fbclid=IwZXh0bgNhZW0CMTEAAR2GuXrGIAfW0zNOzR7NxLHFRrgOBVjItLHlDPZ8qKNpzF3se1J6VreXQ_0_aem_Aax80525wlPi5QnnIg0Ek2rJ0Lh8hfbn8LQReSGGQnctVd7H44_vJT5Ubt9J_xVaisHs0oF-QgcHjS_qTR4qReQ_"><em>https://economictimes.indiatimes.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=54ae351f5327" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Stop ChatGPT’s Voice Feature From Interrupting You]]></title>
            <link>https://medium.com/@stssoftwaregmbh/how-to-stop-chatgpts-voice-feature-from-interrupting-you-2f88d91b73e0?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/2f88d91b73e0</guid>
            <category><![CDATA[chatgpt]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Fri, 19 Apr 2024 12:30:14 GMT</pubDate>
            <atom:updated>2024-04-21T08:25:31.415Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*TW67ZUINBxHh_9WU.jpg" /></figure><p>I was recently waiting for my nails to dry and didn’t want to smudge the paint, when it dawned on me that this would be the perfect opportunity to test some voice-only <a href="https://www.wired.com/tag/artificial-intelligence/">artificial intelligence</a> features. Silicon Valley car owners are having long conversations with <a href="https://www.wired.com/tag/chatgpt/">ChatGPT</a> as they drive around, and I wanted to try chatting hands-free before meeting with two <a href="https://www.wired.com/tag/openai/">OpenAI</a> product leads later that day.</p><p>Even though <a href="https://www.wired.com/tag/chatbots/">chatbots</a> can be <a href="https://www.wired.com/story/how-to-use-chatgpt-brainstorm-ai/">helpful for brainstorms</a>, speaking back-and-forth with ChatGPT was like collaborating with an over-caffeinated friend who can’t stand even a second of silence. I was valiantly fighting against the artificial intelligence tool to finish a single, complete thought before it cut me off.</p><p><em>Me: I wrote a newsletter called AI Unlocked last year for our readers. In that newsletter, I …</em> <em>ChatGPT: Tell me more about your newsletter and what specific question you have in mind related to it.</em></p><p>Only a couple minutes into the experiment, I experienced synthetic-speech-induced stage fright and pleaded with the chatbot for more time, asking for it to give me a second to think between sentences. The chatbot encouraged me to slow down, though the quick cadence of its responses remained unchanged.</p><p>When I mentioned the anxiety I experienced while chatting with the AI to Joanne Jang, a model behavior lead for ChatGPT, she explained it’s an aspect of the user experience the company is trying to fix within the AI model. 
“In our ideal world, the model would actually be a little bit better at detecting when you’re done. So, if you’re not done with your sentence, then it wouldn’t cut you off,” Jang says. “This is something that we’re trying to figure out, and we know that it’s a pain point for our users.”</p><p>With the caveat that you shouldn’t do this while driving, she suggested a simple solution for users: Just tap on the screen. As long as you have one finger free, you can <strong>tap and hold the large circle</strong> in the center of the app during <a href="https://www.wired.com/story/chatgpt-can-now-talk-to-you-and-look-into-your-life/">conversations with ChatGPT</a>. Keep your finger there as you’re speaking to avoid any bot interruptions; let it go whenever you’re actually wrapped up with your vocal prompt.</p><p>While Nick Turley, a ChatGPT product lead, said he prefers using the back-and-forth conversation feature, available in the app by touching the headphone icon, he recommends another method of audible interaction for users who need more time and want to slow things down a bit, or who just find the default rhythm of the AI conversation to be awkward.</p><p>In the mobile app, <strong>tap on the microphone icon</strong> next to the headphones. Say whatever you’d like to use in your prompt, and then hit the blue area to stop the recording when finished. ChatGPT will convert the audio to text and add it to the prompt field. After you press Send, listen to ChatGPT’s response by long-pressing on the output, then selecting <strong>Read Aloud</strong>. 
This slowed-down process is a pleasant way to interact vocally with the AI tool at your own pace, for those who might get stressed out by the service’s rapid verbal responses.</p><p><em>Originally published at </em><a href="https://www.wired.com/story/how-to-stop-chatgpt-talking-over-you/?fbclid=IwZXh0bgNhZW0CMTEAAR3oNhS3YUiQ1CtYcZQPixnUTBIPHnrmBVvYKj9QFMcJ_XF1mgk7Xsxg9v4_aem_Aaz63FFVW8_MWEnvhOSlgTN1YhVY4vJRj4u_wwwJmwbTYWQrjz1LntaQkC96jAwko25g2-xWo-L5tn3q5KYVpael"><em>https://www.wired.com</em></a><em> on April 19, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2f88d91b73e0" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[SAS aims to make AI accessible regardless of skill set with packaged AI models — AI News]]></title>
            <link>https://medium.com/@stssoftwaregmbh/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models-ai-news-5172b41db8d1?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/5172b41db8d1</guid>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Wed, 17 Apr 2024 23:37:06 GMT</pubDate>
            <atom:updated>2024-04-27T05:28:24.549Z</atom:updated>
            <content:encoded><![CDATA[<h3>SAS aims to make AI accessible regardless of skill set with packaged AI models — AI News</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Y8Mi-YXEeuuIzQ-E.jpg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*pL3zvmxTXllcEjZn.jpg" /></figure><p>Duncan is an award-winning editor with more than 20 years’ experience in journalism. Having launched his tech journalism career as editor of Arabian Computer News in Dubai, he has since edited an array of tech and digital marketing publications, including Computer Business Review, TechWeekEurope, Figaro Digital, Digit and Marketing Gazette.</p><p>SAS, a specialist in data and AI solutions, has unveiled what it describes as a “game-changing approach” for organisations to tackle business challenges head-on.</p><p>Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency.</p><p>Chandana Gopal, research director, Future of Intelligence, IDC, said: “SAS is evolving its portfolio to meet wider user needs and capture market share with innovative new offerings.</p><p>“An area that is ripe for SAS is productising models built on SAS’ core assets, talent and IP from its wealth of experience working with customers to solve industry problems.”</p><p>In today’s market, the consumption of models is primarily focused on large language models (LLMs) for generative AI. In reality, LLMs are a very small part of the modelling needs of real-world production deployments of AI and decision making for businesses. 
With the new offering, SAS is moving beyond LLMs and delivering industry-proven deterministic AI models for industries, spanning use cases such as fraud detection, supply chain optimisation, entity management, document conversion, health care payment integrity and more.</p><p>Unlike traditional AI implementations that can be cumbersome and time-consuming, SAS’ industry-specific models are engineered for quick integration, enabling organisations to operationalise trustworthy AI technology and accelerate the realisation of tangible benefits and trusted results.</p><p><strong>Expanding market footprint</strong></p><p>Organisations are facing pressure to compete effectively and are looking to AI to gain an edge. At the same time, staffing data science teams has never been more challenging due to AI skills shortages. Consequently, businesses are demanding agility in using AI to solve problems and require flexible AI solutions to quickly drive business outcomes. SAS’ easy-to-use, yet powerful models tuned for the enterprise enable organisations to benefit from a half-century of SAS’ leadership across industries.</p><p>Delivering industry models as packaged offerings is one outcome of SAS’ commitment of $1 billion to AI-powered industry solutions. 
As outlined in the May 2023 announcement, the investment in AI builds on SAS’ decades-long focus on providing packaged solutions to address industry challenges in banking, government, health care and more.</p><p>Udo Sglavo, VP for AI and Analytics, SAS, said: “Models are the perfect complement to our existing solutions and SAS Viya platform offerings and cater to diverse business needs across various audiences, ensuring that innovation reaches every corner of our ecosystem.</p><p>“By tailoring our approach to understanding specific industry needs, our frameworks empower businesses to flourish in their distinctive environments.”</p><p><strong>Bringing AI to the masses</strong></p><p>SAS is democratising AI by offering out-of-the-box, lightweight AI models — making AI accessible regardless of skill set — starting with an AI assistant for warehouse space optimisation. Leveraging technology like large language models, these assistants cater to nontechnical users, translating interactions into optimised workflows seamlessly and aiding in faster planning decisions.</p><p>Sglavo said: “SAS Models provide organisations with flexible, timely and accessible AI that aligns with industry challenges.</p><p>“Whether you’re embarking on your AI journey or seeking to accelerate the expansion of AI across your enterprise, SAS offers unparalleled depth and breadth in addressing your business’s unique needs.”</p><p>The first SAS Models are expected to be generally available later this year.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/728/0*nP54V0GAicoS1REm.png" /></figure><p><strong>Want to learn more about AI and big data from industry leaders?</strong> Check out<a href="https://www.ai-expo.net/"> AI &amp; Big Data Expo</a> taking place in Amsterdam, California, and London. 
The comprehensive event is co-located with other leading events including <a href="https://www.blockchain-expo.com/">BlockX</a>,<a href="https://digitaltransformation-week.com/"> Digital Transformation Week</a>, and <a href="https://www.cybersecuritycloudexpo.com/">Cyber Security &amp; Cloud Expo</a>.</p><p>Explore other upcoming enterprise technology events and webinars powered by TechForge <a href="https://techforge.pub/upcoming-events/">here</a>.</p><p><em>Originally published at </em><a href="https://www.artificialintelligence-news.com/2024/04/17/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models/"><em>https://www.artificialintelligence-news.com</em></a><em> on April 17, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5172b41db8d1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[New AI method captures uncertainty in medical images]]></title>
            <link>https://medium.com/@stssoftwaregmbh/new-ai-method-captures-uncertainty-in-medical-images-932b71c909b1?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/932b71c909b1</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[new-ai-tool]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Thu, 11 Apr 2024 15:00:01 GMT</pubDate>
            <atom:updated>2024-04-13T06:27:39.987Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*MHgxR35Pp99Ueuum.jpg" /></figure><p>In biomedicine, segmentation involves annotating pixels from an important structure in a medical image, like an organ or cell. Artificial intelligence models can help clinicians by highlighting pixels that may show signs of a certain disease or anomaly.</p><p>However, these models typically only provide one answer, while the problem of medical image segmentation is often far from black and white. Five expert human annotators might provide five different segmentations, perhaps disagreeing on the existence or extent of the borders of a nodule in a lung CT image.</p><p>“Having options can help in decision-making. Even just seeing that there is uncertainty in a medical image can influence someone’s decisions, so it is important to take this uncertainty into account,” says Marianne Rakic, an MIT computer science PhD candidate.</p><p>Rakic is lead author of a <a href="https://arxiv.org/pdf/2401.13650.pdf">paper</a> with others at MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital that introduces a new AI tool that can capture the uncertainty in a medical image.</p><p>Known as <a href="https://arxiv.org/pdf/2401.13650.pdf">Tyche</a> (named for the Greek divinity of chance), the system provides multiple plausible segmentations that each highlight slightly different areas of a medical image. A user can specify how many options Tyche outputs and select the most appropriate one for their purpose.</p><p>Importantly, Tyche can tackle new segmentation tasks without needing to be retrained. Training is a data-intensive process that involves showing a model many examples and requires extensive machine-learning experience.</p><p>Because it doesn’t need retraining, Tyche could be easier for clinicians and biomedical researchers to use than some other methods. 
It could be applied “out of the box” for a variety of tasks, from identifying lesions in a lung X-ray to pinpointing anomalies in a brain MRI.</p><p>Ultimately, this system could improve diagnoses or aid in biomedical research by calling attention to potentially crucial information that other AI tools might miss.</p><p>“Ambiguity has been understudied. If your model completely misses a nodule that three experts say is there and two experts say is not, that is probably something you should pay attention to,” adds senior author Adrian Dalca, an assistant professor at Harvard Medical School and MGH, and a research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).</p><p>Their co-authors include Hallee Wong, a graduate student in electrical engineering and computer science; Jose Javier Gonzalez Ortiz PhD ’23; Beth Cimini, associate director for bioimage analysis at the Broad Institute; and John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering. Rakic will present Tyche at the IEEE Conference on Computer Vision and Pattern Recognition, where Tyche has been selected as a highlight.</p><p><strong>Addressing ambiguity</strong></p><p>AI systems for medical image segmentation typically use <a href="https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414">neural networks</a>. Loosely based on the human brain, neural networks are machine-learning models comprising many interconnected layers of nodes, or neurons, that process data.</p><p>After speaking with collaborators at the Broad Institute and MGH who use these systems, the researchers realized two major issues limit their effectiveness. 
The models cannot capture uncertainty, and they must be retrained for even a slightly different segmentation task.</p><p>Some methods try to overcome one pitfall, but tackling both problems with a single solution has proven especially tricky, Rakic says.</p><p>“If you want to take ambiguity into account, you often have to use an extremely complicated model. With the method we propose, our goal is to make it easy to use with a relatively small model so that it can make predictions quickly,” she says.</p><p>The researchers built Tyche by modifying a straightforward neural network architecture.</p><p>A user first feeds Tyche a few examples that show the segmentation task. For instance, examples could include several images of lesions in a heart MRI that have been segmented by different human experts so the model can learn the task and see that there is ambiguity.</p><p>The researchers found that just 16 example images, called a “context set,” are enough for the model to make good predictions, but there is no limit to the number of examples one can use. The context set enables Tyche to solve new tasks without retraining.</p><p>For Tyche to capture uncertainty, the researchers modified the neural network so it outputs multiple predictions based on one medical image input and the context set. They adjusted the network’s layers so that, as data move from layer to layer, the candidate segmentations produced at each step can “talk” to each other and the examples in the context set.</p><p>In this way, the model can ensure that candidate segmentations are all a bit different, but still solve the task.</p><p>“It is like rolling dice. 
If your model can roll a two, three, or four, but doesn’t know you have a two and a four already, then either one might appear again,” she says.</p><p>They also modified the training process so it is rewarded by maximizing the quality of its best prediction.</p><p>If the user asked for five predictions, at the end they can see all five medical image segmentations Tyche produced, even though one might be better than the others.</p><p>The researchers also developed a version of Tyche that can be used with an existing, pretrained model for medical image segmentation. In this case, Tyche enables the model to output multiple candidates by making slight transformations to images.</p><p><strong>Better, faster predictions</strong></p><p>When the researchers tested Tyche with datasets of annotated medical images, they found that its predictions captured the diversity of human annotators, and that its best predictions were better than any from the baseline models. Tyche also performed faster than most models.</p><p>“Outputting multiple candidates and ensuring they are different from one another really gives you an edge,” Rakic says.</p><p>The researchers also saw that Tyche could outperform more complex models that have been trained using a large, specialized dataset.</p><p>For future work, they plan to try using a more flexible context set, perhaps including text or multiple types of images. 
In addition, they want to explore methods that could improve Tyche’s worst predictions and enhance the system so it can recommend the best segmentation candidates.</p><p>This research is funded, in part, by the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer.</p><p><em>Originally published at </em><a href="https://news.mit.edu/2024/new-ai-method-captures-uncertainty-medical-images-0411?fbclid=IwAR1z_N0lUyIKzRCagGQG-KeYSGvt3Tr01I7zEmGxnaUFs0Lh-5aiXlMbV5I"><em>https://news.mit.edu</em></a><em> on April 11, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=932b71c909b1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Embracing technological disruptions, the era of AI and machine learning]]></title>
            <link>https://medium.com/@stssoftwaregmbh/embracing-technological-disruptions-the-era-of-ai-and-machine-learning-428b122c96bc?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/428b122c96bc</guid>
            <category><![CDATA[technological-disruption]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[technological]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Wed, 10 Apr 2024 05:51:58 GMT</pubDate>
            <atom:updated>2024-04-13T06:29:21.132Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/802/0*0UtlWwKlGmv24QTR.jpg" /></figure><p>In today’s rapidly evolving world, technological advancements have unleashed a wave of disruptions and innovations across various sectors of the global economy.</p><p>The integration of AI and machine learning across Africa promises transformative effects across various sectors.</p><p>A <a href="https://www.universityworldnews.com/post.php?story=20210301092515749">recent report</a> suggests that AI and ML technology could increase Africa’s economy by a remarkable $1.5 trillion — a figure that equals half of the continent’s current gross domestic product.</p><p>Nigeria, like many developing nations, struggled for years with slow progression and cumbersome processes in its economy.</p><p>However, the emergence of technological disruptors, such as artificial intelligence (AI), Internet of Things (IoT), and machine learning, has transformed the Nigerian landscape, paving the way for unprecedented growth and development.</p><p>Artificial intelligence, the ability of computer systems to mimic cognitive functions, has the potential to revolutionize various sectors and empower Nigeria’s economy.</p><p>With AI, businesses can leverage data-driven insights and automation to enhance decision-making processes, boost operational efficiency, and derive valuable market intelligence. 
From customer segmentation to predictive analytics, AI enables companies to optimize their marketing strategies, improve customer experience, and enhance product development.</p><p>With more cities across the continent vying to become smart cities, the Internet of Things (IoT), with its ability to create a network of interconnected devices and sensors, can help these cities transform their industries, households, and public systems with intelligent transportation systems, efficient energy management solutions, and fast, secure payments.</p><p>IoT-enabled solutions can be explored to address key challenges, such as traffic congestion, energy inefficiency, insecurity, and inadequate infrastructure. By employing IoT, businesses can enhance supply chain efficiency, reduce costs, and unlock new revenue streams through innovative services and products.</p><p>IoT-enabled devices facilitate seamless payment experiences, from contactless payments to smart wallets, revolutionizing how consumers interact with financial services.</p><p>By leveraging the power of machine learning, financial institutions can streamline operations, mitigate risks, and provide tailor-made services to their customers.</p><p>Machine learning, a subset of AI, enables computer systems to learn from data and improve performance without explicit programming. This technology holds the key to unlocking tremendous value across various sectors and economies throughout the African continent.</p><p>In the financial services sector, machine learning algorithms can analyze massive volumes of transactional data, detect fraud, identify patterns, personalize customer experiences, optimize pricing strategies, and anticipate market trends.</p><p>According to a <a href="https://www.tekedia.com/ai-is-gaining-traction-in-many-fintechs-across-the-african-region/">report</a> by Tekedia, AI has had a substantial impact on fintech operations. 
By utilizing AI, fintech firms enhance processes and user experiences in digital banking, payments, and personal finance. This adoption of AI promises increased efficiency, greater financial inclusion, and enhanced consumer experiences in Africa’s fintech sector.</p><p>The impact of AI, IoT, and machine learning is not confined to individual sectors, nor is it bound by local markets; it extends across multiple sectors into regional and international markets. For instance, Nigeria’s digital payment and e-commerce companies play a crucial role in driving cross-border transactions, promoting financial inclusion, and bolstering the nation’s position in the global marketplace.</p><p>By embracing AI, IoT, and machine learning, businesses can offer seamless and secure payment solutions, cater to diverse customer needs, and establish a competitive edge in the international arena.</p><p>Furthermore, the intersection of AI, IoT, and machine learning has the capacity to transform Nigeria’s financial services landscape by addressing significant challenges like identity verification, credit scoring, and fraud detection.</p><p>Traditional methods of assessing creditworthiness often leave out a vast portion of the population, leading to limited access to financial services. By leveraging AI’s predictive capabilities and IoT-enabled monitoring systems, financial institutions can develop innovative credit scoring models, expanding access to credit for underserved communities.</p><p>The potential for growth through the adoption of AI, IoT, and machine learning in Nigeria is undeniable. 
As we navigate the digital age, business leaders, policymakers, and decision-makers must seize the opportunity to drive our economy forward.</p><p>There should be increased collaboration between the public and private sectors, deliberate investment in infrastructure and talent development, and the creation of a conducive operating environment and enabling regulations to ensure sustainable growth.</p><p>In adopting these technologies, organizations should cultivate a culture of innovation and embrace digital disruption to drive growth, improve productivity, and provide better services to customers.</p><p>Strategic initiatives around collaborations, research programs, and partnerships with technologically advanced economies and organizations will further accelerate Nigeria’s journey toward becoming a critical part of the fourth industrial revolution (FIR).</p><p>On a larger scale, it is important for African nations, including Nigeria, to collaborate and share knowledge with other countries on the continent. This can foster a culture of innovation, drive talent development, and accelerate the adoption of technological disruptions in Africa.</p><p>Governments should invest in infrastructure development and create an enabling regulatory environment to support the growth of these technologies across the continent.</p><p>A PWC <a href="https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html">study</a> highlights the significant opportunities in the adoption of AI, projecting a potential contribution of up to $15.7 trillion to the global economy by 2030. It emphasizes the need for strategic investment in AI technology to unlock this value.</p><p>AI is expected to enhance labor productivity, stimulate consumer demand, and drive substantial economic gains. 
AI is poised to be a key driver of transformation, disruption, and competitive advantage in the evolving economy.</p><p>As the tide of technological disruptions builds in Africa, businesses, governments, and individuals must ride the wave and lead the continent into a future driven by innovation and competitiveness.</p><p>By embracing AI, IoT, and machine learning, African countries can position themselves as global technology powerhouses and drive sustainable growth and development. The era of slow progress and cumbersome processes in Africa’s economies is behind us, and the future holds endless opportunities for transformation and prosperity.</p><p><em>This article was contributed by </em><strong><em>Ikechukwu Ugwu</em></strong><em>. He is the Founder of Youths for Fintech, a non-profit organization dedicated to training, mentoring, and helping young people start and navigate their fintech careers, and an Enterprise Growth Marketing professional with over 10 years of marketing experience across several sectors such as FMCG, Technology, and Consultancy. He writes from Lagos, Nigeria.</em></p><p><em>Originally published at </em><a href="https://nairametrics.com/2024/04/10/embracing-technological-disruptions-the-era-of-ai-and-machine-learning/"><em>https://nairametrics.com</em></a><em> on April 10, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=428b122c96bc" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Manual transcription still beats AI: A comparative study on transcription services]]></title>
            <link>https://medium.com/@stssoftwaregmbh/manual-transcription-still-beats-ai-a-comparative-study-on-transcription-services-95933bad71b6?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/95933bad71b6</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[manual-transcription]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Sat, 06 Apr 2024 06:32:13 GMT</pubDate>
            <atom:updated>2024-04-06T06:32:13.625Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*Ql7kfyfZ_O0ZscyU.jpg" /></figure><p>A research team from the Empirical Research Support (ERS) at CISPA Helmholtz Center for Information Security has conducted a systematic comparison of the most popular transcription services. The comparison involved 11 providers of manual as well as AI-based transcriptions.</p><p>It shows that, good quality notwithstanding, the latter still have problems with speaker attribution and that there are discrepancies between recording and <a href="https://techxplore.com/tags/transcription/">transcription</a> that distort meaning. Whisper AI from OpenAI delivered the best results among the AI providers.</p><p>Interviews are a popular method for collecting <a href="https://techxplore.com/tags/scientific+data/">scientific data</a>. There is a basic distinction between quantitative and qualitative interviews. While the former is designed to obtain statistically usable information from a large number of participants with the help of standardized questionnaires, the latter is aimed at obtaining interview data that allow for interpretation by the researchers.</p><p>A special type is the guided interview, in which there is a prepared list of questions, which can, however, be deviated from during the <a href="https://techxplore.com/tags/interview/">interview</a>. “In cybersecurity research, these interviews are utilized when exploring the patterns of action and interpretation of actors who operate through digital means,” explains sociologist Dr. Rafael Mrowczynski from CISPA’s Empirical Research Support (ERS) team. The ERS team advises the Center’s researchers on methodological issues.</p><h3>Converting an audio file into text</h3><p>Transcription is a crucial step in qualitative data analysis. “The standard procedure is to convert the audio recordings of the interviews into text. 
It is important for the quality of the data that the transcriptions are adequate,” Mrowczynski explains. Depending on the scientific field, there are different standards for transcription.</p><p>“In <a href="https://techxplore.com/tags/cybersecurity+research/">cybersecurity research</a>, we usually work with transcripts that precisely reproduce the content of the conversation,” says Mrowczynski. An adequate transcript, therefore, only contains the relevant spoken words. The researchers can obtain the transcript in two ways: Either it is created by the research team itself, or the task is outsourced to third-party providers.</p><p>Among the third-party providers, besides manual transcription, there has recently been real hype about automated, AI-based transcription. This is due to the exponential leaps in development and quality that AI applications have experienced in many areas over the last two years.</p><p>The researchers from CISPA’s ERS team wanted to know which provider on the market achieves the best results and how automated, AI-based transcription performs in comparison with manual transcription. The goal was to be able to provide the researchers at CISPA and the cybersecurity community with a recommendation for working with qualitative interviews.</p><h3>The approach of the ERS team</h3><p>For their research project, Mrowczynski and his colleagues Dr. Maria Hellenthal, Dr. Rudolf Siegel, and Dr. Michael Schilling created a test dataset. This consisted of individual interviews lasting about ten minutes and group discussions with CISPA researchers in German and English. The content focused on the research field of cybersecurity.</p><p>“It was important that technical terms from the community were included so that the precision of the transcription could be assessed,” Mrowczynski explains. 
Some of the interviews were additionally enhanced with background noise in order to better reflect real settings in everyday research.</p><p>The data were sent to eleven providers in December 2022. Among those were the transcription services Amberscript, GoTranscript, QualTranscribe, Rev, and Scribbl, as well as the AI-based transcription providers Amazon Transcribe, AssemblyAI, Audiotranskription.de, Google Cloud, Microsoft Azure, and Whisper by OpenAI.</p><p>For the assessment of the obtained transcripts, Mrowczynski and his colleagues created a reference transcript that served as the basis for the comparative analysis. The analysis itself then focused on two central criteria. First, the researchers assessed the word error rate, which measures how many word-level insertions, deletions, and substitutions separate a transcript from the reference transcript, relative to the reference’s length. Second, the qualitative deviation from the reference transcript was coded manually.</p><h3>Manual transcription services beat AI</h3><p>In their paper, Mrowczynski and his colleagues conclude that, in general, “most of the manual transcription services achieve a commendable level of performance, while AI-based services often show meaning-distorting discrepancies between recording and transcription.”</p><p>The distortion of meaning can be clearly seen in technical terms; Mrowczynski explains, “In the transcript, for example, the term ‘hashes’ became ‘ashes.’ That is how we came up with the title of the paper.”</p><p>OpenAI’s Whisper achieved the best results among the AI-based providers. Most providers handled English better than German. Three providers did not offer transcription for German at all. Background noise generally had a negative effect on the result. The AI-based providers particularly had problems with speaker assignments.</p><p>In addition, the transcripts created by an AI had to be reformatted before it was possible to further process them in software for qualitative data analysis. 
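The word error rate used as the first criterion is conventionally computed as the word-level edit distance (insertions, deletions, and substitutions) between a transcript and the reference, divided by the number of words in the reference. A minimal illustrative sketch in Python (the ERS team's own tooling is not described in the article; the example sentence is made up to echo the paper's "hashes"/"ashes" anecdote):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance between the
    hypothesis transcript and the reference, divided by the number of
    words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("hashes" -> "ashes") in six reference words.
print(round(wer("we store the hashes on disk", "we store the ashes on disk"), 3))  # prints 0.167
```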
However, the researchers point out that their analysis reflects the state of the art as of December 2022 and that current developments could not be taken into account.</p><p>The research was <a href="https://dl.acm.org/doi/10.1145/3576915.3624380">presented</a> at the 2023 ACM Conference on Computer and Communications Security (CCS).</p><p><strong>More information:</strong> Rudolf Siegel et al, Poster: From Hashes to Ashes — A Comparison of Transcription Services, <em>Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security</em> (2023). <a href="https://dx.doi.org/10.1145/3576915.3624380">DOI: 10.1145/3576915.3624380</a></p><p>Provided by CISPA Helmholtz Center for Information Security</p><p><em>Originally published at </em><a href="https://techxplore.com/news/2024-04-manual-transcription-ai.html"><em>https://techxplore.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=95933bad71b6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[OpenAI Can Re-Create Human Voices-but Won’t Release the Tech Yet]]></title>
            <link>https://medium.com/@stssoftwaregmbh/openai-can-re-create-human-voices-but-wont-release-the-tech-yet-450cdc2a3550?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/450cdc2a3550</guid>
            <category><![CDATA[recreate-human]]></category>
            <category><![CDATA[human-voice-software]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Sat, 30 Mar 2024 17:30:38 GMT</pubDate>
            <atom:updated>2024-03-31T06:36:40.327Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*mvMuD9_MMPWUelzU.jpg" /></figure><p>Voice synthesis has come a long way since 1978’s <a href="https://www.vintagecomputing.com/index.php/archives/528/vcg-interview-richard-wiggins-talks-speak-spell">Speak &amp; Spell</a> toy, which once wowed people with its state-of-the-art ability to read words aloud using an electronic voice. Now, using deep-learning <a href="https://www.wired.com/tag/artificial-intelligence">AI models</a>, software can create not only realistic-sounding voices but can also convincingly <a href="https://www.wired.com/story/biden-robocall-deepfake-danger/">imitate existing voices</a> using small samples of audio.</p><p>Along those lines, OpenAI this week announced Voice Engine, a text-to-speech AI model for creating synthetic voices based on a 15-second segment of recorded audio. It has provided audio samples of the Voice Engine in action <a href="http://openai.com/blog/navigating-the-challenges-and-opportunities-of-synthetic-voices">on its website</a>.</p><p>Once a voice is cloned, a user can input text into the Voice Engine and get an AI-generated voice result. But OpenAI is not ready to widely release its technology. The company initially planned to launch a pilot program for developers to sign up for the Voice Engine API earlier this month. But after more consideration about ethical implications, the company decided to scale back its ambitions for now.</p><p>“In line with our approach to AI safety and our voluntary commitments, we are choosing to preview but not widely release this technology at this time,” the company writes. 
“We hope this preview of Voice Engine both underscores its potential and also motivates the need to bolster societal resilience against the challenges brought by ever more convincing generative models.”</p><p>Voice cloning tech in general is not particularly new; there have been <a href="https://arstechnica.com/information-technology/2023/01/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio/">several</a> <a href="https://arstechnica.com/information-technology/2023/08/metas-massively-multilingual-ai-model-translates-up-to-100-languages-speech-or-text/">AI voice synthesis models</a> since 2022, and the tech is active in the open source community with packages like <a href="https://research.myshell.ai/open-voice">OpenVoice</a> and <a href="https://huggingface.co/coqui/XTTS-v2">XTTSv2</a>. But the idea that OpenAI is inching toward letting anyone use its particular brand of voice tech is notable. And in some ways, the company’s reticence to release it fully might be the bigger story.</p><p>OpenAI says that benefits of its voice technology include providing reading assistance through natural-sounding voices, enabling global reach for creators by translating content while preserving native accents, supporting non-verbal individuals with personalized speech options, and assisting patients in recovering their own voice after speech-impairing conditions.</p><p>But it also means that anyone with 15 seconds of someone’s recorded voice could effectively clone it, and that has obvious implications for potential misuse. 
Even if OpenAI never widely releases its Voice Engine, the ability to clone voices has already caused trouble in society through <a href="https://arstechnica.com/tech-policy/2023/03/rising-scams-use-ai-to-mimic-voices-of-loved-ones-in-financial-distress/">phone scams</a> where someone imitates a loved one’s voice and <a href="https://arstechnica.com/tech-policy/2024/01/robocall-with-artificial-joe-biden-voice-tells-democrats-not-to-vote/">election campaign robocalls</a> featuring cloned voices from politicians like Joe Biden.</p><p>Also, researchers and reporters <a href="https://www.vice.com/en/article/dy7axa/how-i-broke-into-a-bank-account-with-an-ai-generated-voice">have shown</a> that voice-cloning technology can be used to break into bank accounts that use voice authentication (such as Chase’s <a href="https://www.chase.com/personal/voice-biometrics">Voice ID</a>), which prompted US senator Sherrod Brown of Ohio, the chair of the US Senate Committee on Banking, Housing, and Urban Affairs, to send <a href="https://www.banking.senate.gov/imo/media/doc/bank_of_america_voice_authentication_letter2.pdf">a letter</a> to the CEOs of <a href="https://www.banking.senate.gov/newsroom/majority/brown-presses-banks-voice-authentication-services">several major banks</a> in May 2023 to inquire about the security measures banks are taking to counteract AI-powered risks.</p><p>OpenAI recognizes that the tech might cause trouble if broadly released, so it’s initially trying to work around those issues with a set of rules. It has been testing the technology with a set of select partner companies since last year. 
For example, video synthesis company <a href="https://www.heygen.com/">HeyGen</a> has been using the model to translate a speaker’s voice into other languages while keeping the same vocal sound.</p><p><em>Originally published at </em><a href="https://www.wired.com/story/openai-voice-engine-artificial-intelligence-release/"><em>https://www.wired.com</em></a><em> on March 30, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=450cdc2a3550" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Navigating the Data Revolution: Exploring the Booming Trends in Data Science and Machine Learning —…]]></title>
            <link>https://medium.com/@stssoftwaregmbh/navigating-the-data-revolution-exploring-the-booming-trends-in-data-science-and-machine-learning-fcb504eb45b2?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/fcb504eb45b2</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[booming]]></category>
            <category><![CDATA[data-revolution]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Sat, 23 Mar 2024 07:54:11 GMT</pubDate>
            <atom:updated>2024-03-23T07:54:11.096Z</atom:updated>
            <content:encoded><![CDATA[<h3>Navigating the Data Revolution: Exploring the Booming Trends in Data Science and Machine Learning — KDnuggets</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*CcKOkCPeNSTk9Tuh.jpg" /></figure><p>Image generated with DALLE-3</p><p>In the ever-evolving landscape of technology, the data revolution emerges as a formidable force, reshaping the fabric of industries, economies, and societal norms. Data science and machine learning are at the heart of this transformative surge, serving as crucial catalysts for innovation. They propel us into an era where problem-solving transcends mere human cognition, evolving into a collaborative dance between human intellect and intelligent machines. This article embarks on a comprehensive journey, delving into the emerging trends within data science and machine learning, uncovering the pivotal developments steering us toward a future powered by data.</p><p>A significant trend in data science and machine learning revolves around incorporating artificial intelligence (AI) to drive automation. Industries across the spectrum are harnessing the potential of machine learning algorithms to streamline everyday tasks, fine-tune processes, and boost efficiency. Whether in manufacturing, healthcare, finance, or logistics, the wave of AI-powered automation is fundamentally transforming the operational landscape of businesses. This shift trims costs and elevates overall productivity, marking a revolutionary stride in how enterprises navigate their day-to-day functions.</p><h3>Use Cases</h3><p>In finance, automated trading systems have taken center stage, employing the power of machine learning to dissect market trends and seamlessly execute trades in real time. 
It’s a sophisticated integration of technology into the dynamic realm of financial markets, ushering in a new era of efficiency and data-driven decision-making.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*FEZRlaLiFvCtNWFl.png" /></figure><p>Image from <a href="https://www.aismartz.com/blog/use-cases-of-ai-in-the-finance-sector/">AISmartz</a></p><p>In healthcare, the incredible capabilities of machine learning algorithms are stepping into pivotal roles. These algorithms are lending a helping hand in diagnostics, offering insights into predictive analytics for patient outcomes, and even contributing to the precision of robotic surgeries. It’s a remarkable fusion of technology and medicine that’s reshaping the landscape of patient care.</p><p>Natural Language Processing (NLP) has taken center stage in the expansive realm of machine learning. Thanks to strides in deep learning models such as GPT-3, machines are rapidly evolving, displaying a remarkable proficiency in deciphering and generating language that mimics human expression. This transformative trend is reshaping how we engage with technology, from the intuitive responses of chatbots and virtual assistants to the seamless intricacies of language translation and content creation. The newfound ability of machines to grasp and respond to natural language not only redefines our communication landscape but also opens up novel avenues for enhanced accessibility across various domains.</p><h3>Use Cases</h3><p>Models like GPT-3 have transformed the landscape of content creation and writing industries by producing text resembling human language. 
Their influence is palpable, ushering in a new era where artificial intelligence collaborates with writers to craft compelling and coherent content.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/0*l3CJLR_6cgyczqFs.png" /></figure><p>Image from <a href="https://www.analyticsvidhya.com/blog/2023/03/ai-content-creation/">AnalyticsVidhya</a></p><p>Natural Language Processing (NLP) plays a pivotal role in the functionality of chatbots such as Siri and virtual assistants like Alexa. It’s the magic behind their knack for comprehending and responding to our everyday language queries, making interactions more human and intuitive.</p><p>In language translation, Google Translate relies on the finesse of Natural Language Processing (NLP) to deliver precise and accurate translations across various languages. This sophisticated use of technology makes seamless communication possible across linguistic boundaries.</p><p>In the ever-evolving decision-making landscape, the pivotal role of data cannot be overstated. What’s increasingly taking the spotlight is the imperative need for ethical considerations in AI and data science. There’s a noticeable surge in the recognition of ethical principles as integral elements in the development and deployment phases of machine learning models. Issues such as bias, fairness, transparency, and accountability have risen to the forefront of discussions, shaping the narrative around responsible data science practices. Organizations are actively embracing this ethical shift, adopting frameworks and guidelines that seek to strike a delicate balance between innovation and ethical considerations, steering the course toward a more conscientious era in the world of data.</p><h3>Use Cases</h3><p>The ethical landscape surrounding facial recognition technology is complex, primarily because of the potential biases inherent in the system. 
This has prompted a pressing need for conscientious and responsible deployment, as the consequences of biased facial recognition can have profound implications on privacy, security, and social justice.</p><p>Navigating the terrain of credit scoring with machine learning demands meticulous consideration, as the models involved must be crafted with precision to mitigate any potential discriminatory practices. This conscientious approach is crucial to ensure fairness and equity in lending practices, acknowledging these models’ significant impact on individuals’ financial opportunities.</p><p>The widespread adoption of Internet of Things (IoT) devices has triggered a notable upswing in data generation right at the edge of networks. A trend gaining significant traction is the fusion of edge computing with decentralized machine learning geared towards processing data near its source. This strategic move holds the promise of curbing latency and optimizing bandwidth usage. Its relevance is especially pronounced in sectors like autonomous vehicles, smart cities, and industrial IoT, where split-second decision-making is paramount. Integrating machine learning models into edge devices is instrumental in fostering systems that are intelligent and highly responsive to real-time demands.</p><h3>Use Cases</h3><p>In the realm of autonomous vehicles, edge computing has proven transformative. Enabling the swift processing of data directly from sensors empowers these vehicles to make rapid decisions, enhancing their ability to navigate the road with agility and ensuring a level of responsiveness critical to their safe and efficient operation.</p><p>Incorporating decentralized machine learning into smart city applications marks a significant stride forward. This innovation facilitates real-time data analysis from various sensors, contributing to the city’s overall efficiency by providing timely insights for better decision-making and resource allocation. 
It exemplifies the seamless integration of technology to create more intelligent, responsive urban environments.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*dS0O1Ztv_Sno1Nt5.png" /></figure><p>Image from <a href="https://towardsdatascience.com/how-ai-can-help-smart-city-initiatives-f83484891343">TowardsDataScience</a></p><p>The landscape of data science and machine learning is expanding beyond traditional boundaries, evolving into an interdisciplinary domain. There’s a noticeable trend wherein professionals from diverse backgrounds collaborate seamlessly to tackle intricate problems. The demand for hybrid skill sets, amalgamating proficiency in data science, domain-specific knowledge, and effective communication, is steadily increasing. In this interconnected data ecosystem, professionals adept at bridging the gap between technical intricacies and understanding non-technical stakeholders are emerging as increasingly invaluable assets.</p><h3>Use Cases</h3><p>In the intricate realm of healthcare, a dynamic collaboration unfolds as data scientists and healthcare professionals join forces. Together, they sift through vast troves of patient data, applying their combined expertise to glean valuable insights to enhance treatment outcomes and usher in a new era of personalized and effective healthcare solutions.</p><p>Collaboration emerges at the intersection of finance and data science as professionals with dual expertise unite forces. Together, they channel their knowledge to craft predictive models that delve into the intricate tapestry of market trends, exemplifying a harmonious blend of financial acumen and data-driven insights.</p><p>Fueled by data science and machine learning, the ongoing data revolution fundamentally reshapes our daily lives and professional landscapes. 
Whether it’s the advent of AI-powered automation, the increasing emphasis on ethical considerations, or the collaborative synergy of interdisciplinary approaches, the discussed trends provide a nuanced glimpse into these fields’ dynamic and ever-evolving nature. Successfully navigating this revolution necessitates a steadfast commitment to staying abreast of developments, embracing responsible practices, and cultivating a culture of perpetual learning. Looking ahead, the convergence of data science and machine learning promises to unravel new possibilities, continuously propelling innovation across diverse industries.</p><p><a href="https://www.linkedin.com/in/aryan-garg-1bbb791a3/"><strong>Aryan Garg</strong></a> is a B.Tech. Electrical Engineering student, currently in the final year of his undergrad. His interest lies in the field of Web Development and Machine Learning. He has pursued this interest and is eager to work more in these directions.</p><p><em>Originally published at </em><a href="https://www.kdnuggets.com/navigating-the-data-revolution-exploring-the-booming-trends-in-data-science-and-machine-learning"><em>https://www.kdnuggets.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=fcb504eb45b2" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A change in the machine learning landscape]]></title>
            <link>https://medium.com/@stssoftwaregmbh/a-change-in-the-machine-learning-landscape-d841171a7f2e?source=rss-902ba6c43c66------2</link>
            <guid isPermaLink="false">https://medium.com/p/d841171a7f2e</guid>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[STS Software GmbH]]></dc:creator>
            <pubDate>Sat, 23 Mar 2024 07:51:57 GMT</pubDate>
            <atom:updated>2024-03-23T07:51:57.003Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*lswQ_I17gr5Bv_P5" /></figure><p>Federated learning marks a milestone in enhancing collaborative AI model training. It is shifting the main approach to <a href="https://www.infoworld.com/article/3214424/what-is-machine-learning-intelligence-derived-from-data.html">machine learning</a>, moving away from the traditional centralized training methods towards more decentralized ones. Data is scattered, and we need to leverage it as training data where it exists.</p><p>This paradigm is nothing new. I was playing around with it in the 1990s. What’s old is new again… again. Federated learning allows for the collaborative training of machine learning models across multiple devices or servers, harnessing their collective data without needing to exchange or centralize it. Why should you care? Security and privacy, that’s why.</p><p>Here are the core principles of federated learning:</p><ul><li><strong>Decentralization of data</strong>: Unlike conventional methods that require data to be centralized, federated learning distributes the model to the data source, thus using data where it exists. For instance, if we’re keeping data on a fracturing robot to monitor operations, there is no need to migrate that data to some centralized data repository. We leverage it directly from the robot. (This is an actual use case for me.)</li><li><strong>Privacy preservation</strong>: Federated learning enhances user privacy by design because the data remains on users’ devices, such as phones, tablets, computers, cars, or smartwatches. 
This minimizes the exposure of sensitive information since we’re going directly from the device to the AI model.</li><li><strong>Collaborative learning</strong>: A model is able to learn from diverse data sets across different devices or servers, naturally.</li><li><strong>Efficient data utilization</strong>: Federated learning is particularly useful for problem domains with massive, distributed, or sensitive data. It optimizes the use of available data while respecting privacy policies that are native to the specific distributed data set.</li></ul><p>These factors are useful for AI, offering better security and privacy. Also, we’re not storing the same data in two different places, which is the common practice today in building new AI systems, such as <a href="https://www.infoworld.com/article/3689973/what-is-generative-ai-artificial-intelligence-that-creates.html">generative AI</a>.</p><h3>The RoPPFL framework</h3><p>Federated learning offers the promising prospect of collaborative model training across multiple devices or servers without needing to centralize the data. However, there are still security and privacy concerns, primarily the risk of local data set privacy leakage and the threat of AI model poisoning attacks by malicious clients.</p><p>What will save us? Naturally, when a new problem comes along, we must create unique solutions with cool names and acronyms. Let me introduce you to the <a href="https://www.sciencedirect.com/science/article/abs/pii/S1389128624001531?dgcid=rss_sd_all">Robust and Privacy-Preserving Federated Learning (RoPPFL) framework</a>, a solution to address the inherent challenges associated with federated learning in <a href="https://www.networkworld.com/article/964305/what-is-edge-computing-and-how-it-s-changing-the-network.html">edge computing</a> environments.</p><p>The RoPPFL framework introduces a blend of local differential privacy (LDP) and similarity-based Robust Weighted Aggregation (RoWA) techniques. 
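<p>As a back-of-the-envelope sketch of how that blend could work (my own illustrative simplification in Python, not the framework authors’ implementation — the function names, the Laplace mechanism, and the median reference are all assumptions): each client adds calibrated noise to its update before it leaves the device, and the aggregator weights updates by similarity to a robust reference so suspicious outliers are discounted.</p>

```python
import math
import random

def laplace(scale):
    """Draw one sample from a zero-mean Laplace distribution
    via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def ldp_noise(update, epsilon=1.0, sensitivity=1.0):
    """LDP step (sketch): perturb a client's model update with
    calibrated Laplace noise before sharing it.
    Larger epsilon means less noise and weaker privacy."""
    scale = sensitivity / epsilon
    return [w + laplace(scale) for w in update]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rowa_aggregate(updates):
    """RoWA-style step (sketch): weight each update by its cosine
    similarity to the coordinate-wise median, so updates that look
    like poisoning attempts contribute little or nothing."""
    def median(xs):
        s = sorted(xs)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    reference = [median(col) for col in zip(*updates)]
    weights = [max(cosine(u, reference), 0.0) for u in updates]
    total = sum(weights) or 1.0
    return [sum(w * u[i] for w, u in zip(weights, updates)) / total
            for i in range(len(reference))]

# Three honest clients plus one poisoned update: the similarity
# weighting all but ignores the outlier.
noisy = [ldp_noise(u, epsilon=100.0) for u in
         [[1.0, 0.9], [0.9, 1.1], [1.1, 1.0]]]
aggregated = rowa_aggregate(noisy + [[-9.0, -9.0]])
```

<p>A real system would operate on full model tensors and tune epsilon carefully; this only shows the shape of the two mechanisms working together.</p>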
LDP protects data privacy by adding calibrated noise to the model updates. This makes it exceedingly difficult for attackers to infer individual data points, a common attack against AI systems.</p><p>RoWA enhances the system’s resilience against poisoning attacks by aggregating model updates based on their similarity, mitigating the impact of any malicious interventions. RoPPFL uses a hierarchical federated learning structure that organizes the model training process across different layers, including a cloud server, edge nodes, and client devices (e.g., smartphones).</p><h3>Improved privacy and security</h3><p>RoPPFL represents a step in the right direction for a cloud architect who needs to deal with this stuff all the time. Also, 80% of my work is generative AI these days, which is why I’m bringing it up, even though it’s borderline academic jargon.</p><p>This model addresses the simultaneous challenges of security and privacy, including the use of edge devices, such as smartphones and other personal devices, as sources of training data for data-hungry AI systems. The model combines local differential privacy with a unique aggregation mechanism. The RoPPFL framework paves the way for the collaborative model training paradigm to exist and thrive without compromising on data protection and privacy, which is very much at risk with the use of AI.</p><p>The authors of the article that I referenced above are also the creators of the framework. So, make sure to read it if you’re interested in learning more about this topic.</p><p>I bring this up because we need to think about smarter ways of doing things if we’re going to design, build, and operate AI systems that eat our data for breakfast. 
We need to figure out how to build these AI systems (whether in the cloud or not) in ways that don’t do harm.</p><p>Given the current situation where enterprises are standing up generative AI systems first and asking the important questions later, we need more sound thinking around how we build, deploy, and secure these solutions so they become common practices. Right now, I bet many of you who are building AI systems that use distributed data have never heard of this framework. This is one of many current and future ideas that you need to understand.</p><p>Copyright © 2024 IDG Communications, Inc.</p><p><em>Originally published at </em><a href="https://www.infoworld.com/article/3714680/a-change-in-the-machine-learning-landscape.html"><em>https://www.infoworld.com</em></a><em>.</em></p>]]></content:encoded>
        </item>
    </channel>
</rss>