<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Licenseware - Medium]]></title>
        <description><![CDATA[Our vision is to make software asset management a commodity for organizations, by reducing complexity and lowering investment cost. We are a start-up developing the first open app ecosystem for software license management. - Medium]]></description>
        <link>https://medium.com/licenseware?source=rss----7cb685824622---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Licenseware - Medium</title>
            <link>https://medium.com/licenseware?source=rss----7cb685824622---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 15:57:57 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/licenseware" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Unraveling the Tangle: SOA, Microservices, and the Myth of the ‘Bad Implementation’]]></title>
            <link>https://medium.com/licenseware/unraveling-the-tangle-soa-microservices-and-the-myth-of-the-bad-implementation-0cb2b6c317e7?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/0cb2b6c317e7</guid>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[soa]]></category>
            <category><![CDATA[enterprise-software]]></category>
            <dc:creator><![CDATA[Ciprian Grigore]]></dc:creator>
            <pubDate>Wed, 08 Nov 2023 18:55:32 GMT</pubDate>
            <atom:updated>2023-11-08T18:55:32.278Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SrRmAVjz6a55R_7U0gxyug.png" /></figure><p>In the bustling frontier of software architecture, where ‘microservices’ is often the buzzword du jour, there lies an understated foundation that many modern developers may not have directly encountered yet interact with daily: Service-Oriented Architecture (SOA). Before microservices began dominating tech conversations, SOA revolutionized how businesses thought about IT systems. It broke down monolithic applications into interoperable services, introducing a level of flexibility and reuse that was unprecedented at the time.</p><p>Today, as we celebrate the agility and scalability that microservices promise, it’s crucial to acknowledge that they didn’t emerge in a vacuum. SOA provided the blueprint for decoupled services, which microservices have refined and scaled down. This post is a nod to the game-changing era of SOA, aiming to bridge the knowledge gap and show how its principles are still at the heart of many ‘modern’ architectures. It’s a journey back to the roots to explore how SOA shaped the path for the distributed systems we strive to perfect today.</p><h3>Historical Context: The Technical Trek from Databases to Distributed Services</h3><p>Let’s rewind to the beginning: applications were essentially front-ends directly connected to databases. This was simple, but as scalability became a buzzword and more users hit the systems, this model started to crack. Performance issues and the struggle to maintain a growing codebase became the bane of developers’ existence.</p><p>As these concerns grew, middleware solutions came into play, acting as an intermediary layer that handled the business logic and data access. This setup started to address the issues of scalability but introduced new challenges in managing the middle layer. 
Nevertheless, it was a pivotal step towards more sophisticated architectures, setting the stage for what would become a transformative approach: Service-Oriented Architecture (SOA).</p><p>SOA was a paradigm shift. It wasn’t just about connecting points A and B; it was about defining services that encapsulated business functions and exposed them through well-defined interfaces. Using XML for data interchange and SOAP for messaging, SOA enabled a level of decoupling that allowed for independent development, testing, and deployment of services.</p><p>The challenges with SOA were distinct: services could become too granular, leading to a proliferation of interfaces that were difficult to manage or too generic, resulting in a loss of the agility that SOA aimed to provide. Moreover, the heavy reliance on XML and SOAP often meant a performance hit due to the size and complexity of the messages being passed around.</p><p>Despite these challenges, SOA provided a solid foundation for the principles that would underpin microservices. It introduced the idea of breaking down applications into discrete components that could be developed, maintained, and upgraded independently. Tools and platforms evolved to support SOA’s principles, such as enterprise service buses (ESBs) that managed the orchestration of service interactions and complex event processing engines that handled asynchronous service communication.</p><p>In essence, SOA was not merely a stepping stone but a significant leap forward that established many of the core principles that are now celebrated in modern microservices architectures. It gave us the conceptual and practical groundwork for building scalable, modular applications. 
While microservices have since taken the limelight, often focusing on the finer details of containerization and orchestration, it’s SOA that laid the groundwork for thinking about software as a composition of services, each with its own life cycle.</p><p>Understanding SOA&#39;s history and role is crucial, especially for the new generation of developers. It’s a reminder that many of the ‘new’ concepts they work with today are deeply rooted in the evolution that SOA represented. This look back is more than nostalgia; it’s about recognizing the depth and breadth of SOA’s influence on how we build software now and in the future.</p><h3>Defining the Terms: Distinguishing SOA, Microservices, and Distributed Monoliths</h3><p>When we talk about distributed architectures, it’s not a one-size-fits-all scenario. There’s a spectrum, and understanding where each architectural style sits on this spectrum is key to demystifying the whole concept.</p><p><strong>SOA (Service-Oriented Architecture)</strong>: This is the granddaddy of distributed architectures. SOA is about designing applications as a collection of services that can be reused for different purposes. The focus here is on business functionality — each service encapsulates a business process and exposes a set of interfaces. SOA is protocol-agnostic, although it historically favored communication via SOAP over HTTP, using XML as the message format.</p><p>SOA services tend to be larger and more comprehensive than what you’d see in a microservices architecture. They are designed to be platform-independent, interoperable, and often involve an enterprise service bus (ESB) to manage service interactions.</p><p><strong>Microservices</strong>: This is the new kid on the block, at least compared to SOA. Microservices take the concept of SOA but go for a finer-grained approach. Each service is small, highly focused on doing one thing, and independently deployable. 
Communication is typically lighter and faster, often using REST over HTTP or lighter protocols like gRPC.</p><p>Microservices are designed to be polyglot, both in terms of data storage and the languages they’re written in, and they lean heavily on DevOps concepts like orchestration and service meshes.</p><p><strong>Distributed Monolith</strong>: Just because an application is split into multiple services doesn’t mean it’s reaping the benefits of a distributed system. A distributed monolith is when you have services that are so tightly coupled they might as well be a monolith. Changes in one service necessitate changes in others, and they often have to be deployed together.</p><p>The key takeaway is that not every distributed architecture you see is a microservices architecture, and not every tightly coupled distributed system is a distributed monolith by default. It’s the design principles and the way the services interact that determine the architecture. SOA and microservices share a common heritage but diverge significantly in scale and philosophy. Understanding these distinctions is crucial in recognizing the architecture you’re working with — or aiming to build.</p><h3>SOA: Thriving in the Enterprise</h3><p>SOA, as an architectural pattern, is not confined to the technologies that popularized it; instead, it is defined by its ability to evolve with the technological landscape. It’s true that numerous legacy systems still rely on XML and SOAP, the stalwarts of traditional SOA implementations. These systems continue to serve critical business functions in many enterprises, proving the lasting value of the SOA pattern.</p><p>However, the longevity of SOA is not about clinging to older technologies — it’s about the pattern’s inherent adaptability. SOA was a revolutionary idea that laid the groundwork for modular and distributed system design, and it remains just as relevant today as when it was first conceived. 
The core principles of SOA — loose coupling, service abstraction, and service reusability — are timeless and technology-agnostic.</p><p>In the modern context, SOA continues to thrive, not just in its classic form but by embracing new technologies. While traditional tools like Oracle’s SOA Suite and Red Hat’s JBoss provide a robust infrastructure for enterprise systems, the pattern has expanded to include contemporary technologies such as Kafka for streamlining data flows and cloud platforms for agility and scalability.</p><p>The shift from SOAP to REST, from XML to JSON, and the integration of AMQP and MQTT protocols illustrate SOA’s versatility. These changes don’t represent a departure from SOA but rather its natural progression. The architecture is not static; it is dynamic and can incorporate a range of technologies, from the established to the cutting-edge.</p><p>SOA’s true strength lies in its foundational design principles, which can be applied across different eras of technology. Enterprises can and do implement SOA with a mix of legacy and modern technologies, choosing the best tools for their specific context and needs.</p><p>The evolution of SOA is a reflection of the evolution of enterprise IT itself — a continuum of innovation that respects the past’s contributions while forging ahead into the future. SOA’s principles have proven to be enduring, and as long as they continue to be relevant to business needs, SOA will remain an indispensable part of the enterprise architecture landscape.</p><h3>Key Differences Between SOA and Microservices</h3><p>While SOA and microservices share common ground as service-oriented architectures, they diverge in several key areas. 
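</p><p>As a concrete, purely illustrative sketch of that shift (the operation and field names below are invented, not from any real service), the same &#8216;get user&#8217; request can be expressed as a classic SOAP envelope or as the JSON body a REST microservice might exchange, using only the Python standard library:</p>

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical payload: the same "get user" request in both styles.
user_request = {"userId": 42}

# Classic SOA style: a SOAP 1.1 envelope carrying the request as XML.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)
envelope = ET.Element("{%s}Envelope" % SOAP_NS)
body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
get_user = ET.SubElement(body, "GetUser")  # invented operation name
ET.SubElement(get_user, "userId").text = str(user_request["userId"])
soap_bytes = ET.tostring(envelope, encoding="utf-8")

# Microservices style: the same request as a JSON body for an HTTP endpoint.
json_bytes = json.dumps(user_request).encode("utf-8")

print(len(soap_bytes), "bytes as SOAP/XML")
print(len(json_bytes), "bytes as JSON")
```

<p>The exact byte counts depend on formatting, but the envelope overhead is the point: add WS-* headers and schema validation and the gap widens, which is one reason lighter protocols won out for fine-grained services.</p><p>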
Understanding these differences is crucial for architects and developers when deciding which pattern best fits their project needs.</p><p><strong>Granularity</strong>: SOA typically defines services at a more coarse-grained level, oriented around business capabilities and often encompassing multiple business functions. Microservices take granularity to the extreme, with services often reflecting individual business functions or processes.</p><p><strong>Component Sharing</strong>: In SOA, the components or services can be shared among multiple applications or consumer services. This promotes reusability but can also create dependencies. On the other hand, Microservices favor service independence and isolation — even at the cost of some redundancy — to avoid any coupling between services.</p><p><strong>Data Management</strong>: SOA architectures often rely on a single data store or a few integrated databases. Microservices push for a decentralized approach to data management, where each service manages its own database, if necessary. This approach prevents data dependencies across services, facilitating easier scaling and resilience.</p><p><strong>Communication Protocols</strong>: SOA’s communication is traditionally based on enterprise-level standard protocols like SOAP, which can be heavy and require significant overhead. Microservices tend to use REST, gRPC, or messaging systems like Kafka for lightweight, often asynchronous communication.</p><p><strong>Governance and Operations</strong>: SOA comes with centralized governance, which enforces standards and protocols across all services. This can be beneficial for uniformity but may slow down development. 
Microservices advocate for decentralized governance, giving individual teams full control over their services from development to deployment, which aligns with the DevOps culture.</p><p><strong>Performance and Scalability</strong>: SOA’s shared resources and synchronous communication can lead to performance bottlenecks. Microservices architecture is designed to overcome these limitations by allowing services to be scaled independently, often using containers that can be deployed across multiple servers or cloud environments.</p><p><strong>Deployment</strong>: Deployment in SOA is generally less frequent and often requires coordination across different services due to shared dependencies. Microservices embrace continuous delivery, where services can be deployed independently and frequently, supporting a more agile development process.</p><p><strong>Interoperability</strong>: SOA excels in enterprise environments where different services must interact with each other and legacy systems. Microservices are more suited to greenfield projects or when an application can be developed from scratch without extensive backward compatibility requirements.</p><h3>Closing Thoughts: The Persistent Relevance of SOA in a Modern Landscape</h3><p>The strategic implementation of SOA across various industries underscores its undiminished relevance and adaptability in the face of evolving technological landscapes. Financial services, healthcare, government, and retail showcase the broad impact of SOA, where it has driven agility, reusability, and scalability within IT systems.</p><p>SOA has been a game-changer in the financial world, enabling institutions to untangle complex systems for heightened flexibility. This adaptability is crucial for staying compliant with dynamic market regulations and integrating cutting-edge fintech innovations. 
Healthcare organizations, too, have harnessed SOA to bridge the gaps between siloed systems, facilitating seamless data flow and unified patient care experiences.</p><p>The narrative of SOA today is woven intricately with the advancement of cloud technologies. Organizations are not just using SOA to make their existing systems more efficient; they are using SOA principles to transition to the cloud. These systems are reborn by encapsulating legacy functionalities within service interfaces, gaining new life as part of sophisticated, cloud-native ecosystems.</p><p>This synergy between SOA and modern API-driven architectures illustrates a crucial point: SOA is not static. It’s a vibrant, evolving framework capable of embracing change, whether that’s in the form of APIs, the cloud, or even microservices. SOA’s principles remain as relevant as ever, providing a robust foundation for businesses to build upon for the foreseeable future.</p><p>As we stand at the intersection of legacy systems and the new digital age, SOA remains a key player — facilitating transformations, enabling innovations, and supporting the ever-growing demands of businesses worldwide. It’s a testament to the enduring power of a well-conceived architectural pattern and its ability to stand the test of time.</p><hr><p><a href="https://medium.com/licenseware/unraveling-the-tangle-soa-microservices-and-the-myth-of-the-bad-implementation-0cb2b6c317e7">Unraveling the Tangle: SOA, Microservices, and the Myth of the ‘Bad Implementation’</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Is software getting worse?]]></title>
            <link>https://medium.com/licenseware/is-software-getting-worse-3e8ec6254af4?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/3e8ec6254af4</guid>
            <category><![CDATA[user-experience]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Ciprian Grigore]]></dc:creator>
            <pubDate>Thu, 26 Oct 2023 13:09:57 GMT</pubDate>
            <atom:updated>2023-10-26T13:09:56.915Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9JTEGQr_8jbJkMGpaikkBA.png" /></figure><p>Think back to the last couple of years. How many times have you been annoyed with a recent update to one of your favorite software products? How many times did you find bugs in production software? Bugs are almost a natural occurrence in the world of software: from burning Teslas and major vulnerabilities like the log4j fiasco to Chrome eating your computer’s RAM like the cookie monster, they seem inevitable.</p><p>As software makes its way into every single aspect of our lives, I believe our tolerance for issues has increased exponentially. We shrug off service unavailability, we hit refresh as many times as it takes, and we click through countless submenus for that action that used to be a button two iterations ago. The limit to the number of hoops we’ll jump through to get what we need from the products we use is sky-high.</p><p>So I ask myself: where did things go wrong? From the grazing-animal analogy of the software user and the simplistic beauty of This is a motherfucking website, how did we end up in a place where the user experience is secondary to management goals, story points completed, and AI features implemented?</p><p>Here are a couple of theories I could come up with.</p><ol><li>Keeping up with the skill requirements is hard. One of the recurring pains I read about in programming communities on Reddit is that it’s increasingly difficult to keep up with all the technologies, frameworks, and methodologies out there. To stay relevant as a software engineer today, you have to work ten times harder than even five years ago. Nowadays, engineers are not only required to be proficient in whatever stack their team is using, but are also expected to pick up new and sometimes very complex technologies that have been out for less than a year.</li><li>There are just so many waves. 
Cloud, big data, microservices, serverless, Kubernetes, streaming data, transformers, LLMs, and the list goes on. We’re barely figuring out what tools and patterns work well for concepts that came out 10 years ago, but nobody is willing to slow down and risk losing the race, so we’re pushing through. We scour the internet for tutorials, devour documentation, and spend long nights experimenting, only to come up with half-assed MVPs that will put us on the list of companies doing that thing or using that service. Forget about user interviews, forget about assessing whether our users really need it or whether there’s a simpler way to do it; the hype is more important. I’m not sure this has been much different historically, since trying to get a competitive edge in business is absolutely expected, but I believe that in the technological revolution we’re experiencing, it has become very difficult to put out good products while keeping up with the demand for innovation.</li><li>Working in software development is too romanticized. Everyone not working in the industry has this crazy idea that once you’re a developer you will make more money, work less, and look better. I’ve seen people wanting to switch to software development from professions like cooks, builders, salesmen, and even doctors! Do these people actually have a passion for software? Are they naturally born makers? Probably not. And most likely many of them fail at making the switch when they realize it’s not as easy as it seems. But given the huge demand for talent, setting aside the great layoffs of the past couple of years, you can be quite sure some of the features you’re using are not built with passion by artisans but rather by someone trying to finish the tasks in their sprint so they can get back to their character in Baldur’s Gate. 
When Bill Gates said that developers should be lazy because they will find the easiest and best way to do something, I’m not sure this is what he had in mind.</li><li>Highly scalable SaaS products require (maybe) complex architectures. When Reddit hosts its yearly r/place event, it has to put special systems in place to handle the massive traffic generated. Twitter has rearchitected its platform repeatedly, from very simple, to increasingly complex, and back toward simplification, and who knows where they are now. Success means scale, and scale means complex systems with multiple points of failure. It also means that it’s very difficult to test and develop for those systems, since you don’t really want every single developer to run the entire microservices stack on their local machine. The world of client software installed locally and functioning offline is far behind us, and delivering solid performance at a large scale is not an easy job. Of course, issues will slip through the cracks.</li><li>UX is a second thought. I feel like there was a moment in our industry when UX was a critical function in product development. Then some managers somewhere read a McKinsey developer productivity report and decided that story points are massively more important and good UX is just delaying them. I don’t know if that’s the reason, or whether UX designers simply feel they need to earn their wages and so keep changing things that didn’t need improving. Either way, some products seem to be completely bipolar when it comes to UX (looking at you, Chrome), with one update hiding important buttons or disabling features only to bring them back in future releases.</li></ol><p>I have to admit I haven’t done real research on this topic; the above are just shower thoughts and general frustrations from watching products I’ve loved for a long time become worse overnight. 
I would love to see some statistics on the topic or even hear different opinions.</p><p>Regardless, I believe people have become significantly more tolerant when it comes to changes and issues with software, and that’s great overall. We are no longer grazing animals that will give up in seconds if we don’t find what we’re looking for; quite the opposite: we want those features to work, and we’ll work for them if we have to. Too bad that when this happens, the people building the features get lazy.</p><hr><p><a href="https://medium.com/licenseware/is-software-getting-worse-3e8ec6254af4">Is software getting worse?</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Simplifying Architecture: Moving from Microservices to a Monolith ]]></title>
            <link>https://medium.com/licenseware/simplifying-architecture-moving-from-microservices-to-a-monolith-6f8ad82af508?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/6f8ad82af508</guid>
            <category><![CDATA[monolithic-architecture]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Licenseware]]></dc:creator>
            <pubDate>Thu, 21 Sep 2023 09:25:49 GMT</pubDate>
            <atom:updated>2023-09-21T09:25:49.064Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jsRkkgJ8rJWWF2Zzbn2N1w.png" /></figure><p>As the <a href="https://www.linkedin.com/in/cipriangrigore1/">CTO</a> of a <a href="https://www.linkedin.com/company/licenseware">tech startup</a>, you’re constantly faced with important decisions that can shape the trajectory of your company. One such decision that we made at the end of 2022 was to migrate from a microservices architecture to a monolith. This transition came with its fair share of pros and cons, but in hindsight, it has proven to be the correct choice for our company. In this article, we’ll delve into the reasons behind this move and the valuable lessons we learned from it.</p><h3>The Microservices Maze</h3><p>In our previous architecture, each application operated as an independent service, complete with its own infrastructure, data model, and API. On top of these services, we had an ‘aggregation layer’ that provided authentication and authorization, along with a single API that the front-end utilized, creating a seamless user experience. Theoretically, this setup offered the ability to scale individual services based on their usage and make changes in one service without impacting the others. However, reality had a different story to tell.</p><h3>Lessons Learned and Improvements Observed</h3><p><strong>1. Economy of Scale</strong></p><p>In various industries, it’s a well-known fact that a single, large engine carrying multiple loads is more efficient than multiple smaller engines. This concept applies equally to computing as it does to trains or cargo ships. Operating separate compute resources for each service proved significantly more expensive than consolidating them into a single, robust server. As a startup in its early years, our usage patterns were often idle or low, with occasional spikes when customers processed data. 
This left us either paying for servers that sat idle or, because provisioning for the spikes was too expensive, watching them struggle to meet peak demand.</p><pre>💡 Lesson: Architect for the scale you need, not the scale you dream of.</pre><p><strong>2. Development Experience</strong></p><p>In our previous architecture, development was a cumbersome process. Every engineer had to become proficient with technologies like Docker, Kubernetes, and Bash. While these skills are valuable, the constant need to spin up multiple services, manage dependencies, network configurations, and storage, all while dealing with issues arising from tightly coupled services, meant that valuable development time was spent on DevOps tasks rather than building features and addressing bugs. Since the migration to a monolith, our development speed has improved by at least 50%.</p><pre>💡 Lesson: Prioritize development speed and add complexity only when absolutely necessary. Every piece of infrastructure you add to your architecture will require every developer to deal with it.</pre><p><strong>3. Simplified Monitoring</strong></p><p>In the microservices world, monitoring can become a headache. With more than ten services running concurrently, we had just as many potential points of failure. Between setting up alerts for each service, sifting through multiple log systems, and tracing network activity from one service to another, it could sometimes take over an hour to identify complex issues that spanned multiple services. This also compelled us to use complex monitoring solutions, each with its own learning curve and requirements. Post-migration, it now takes us an average of just ten minutes to identify an issue, with the need to check only one or two log streams.</p><pre>💡 Lesson: Monitoring is supposed to make your life easier; don&#39;t let it become a source of complexity.</pre><h3>The Path Forward</h3><p>While there are more benefits we could discuss, we’ll keep this post concise. 
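</p><p>The post stays deliberately high-level, so as a purely illustrative sketch (the module names, payloads, and dispatcher below are invented, not Licenseware&#8217;s actual code), here is the &#8216;modular monolith&#8217; idea in miniature: former services become in-process modules behind one dispatcher, so a call between them is a plain function call rather than a network hop:</p>

```python
# Hypothetical sketch of a modular monolith: each former microservice
# becomes a plain class registered with one in-process dispatcher.

class LicensingService:
    def analyze(self, payload):
        return {"app": "licensing", "entitlements": len(payload.get("licenses", []))}

class ReportingService:
    def analyze(self, payload):
        return {"app": "reporting", "rows": len(payload.get("records", []))}

class Monolith:
    """Routes requests to in-process modules instead of over the network."""

    def __init__(self):
        self._services = {}

    def register(self, name, service):
        self._services[name] = service

    def handle(self, name, payload):
        # One process, one log stream, one unit of deployment.
        return self._services[name].analyze(payload)

app = Monolith()
app.register("licensing", LicensingService())
app.register("reporting", ReportingService())

print(app.handle("licensing", {"licenses": ["a", "b"]}))
```

<p>Everything above runs in one process: one deployment unit, one log stream, and moving a module boundary is a refactor rather than an API migration.</p><p>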
Overall, transitioning from a complex microservices architecture to a simple monolith has proven to be a hugely beneficial decision for our specific use case. It has granted us the stability and confidence to focus on building the product our customers need. In the future, we may reevaluate whether some services should be decoupled from the monolith, but for now, keeping things simple has been the key to our success.</p><p>In the ever-evolving tech landscape, adapting your architecture to meet your current needs and priorities is essential. Our journey from microservices to a monolith has been a testament to the importance of simplicity, efficiency, and adaptability in building a successful tech startup.</p><p>Source: <a href="https://licenseware.io/simplifying-architecture-moving-from-microservices-to-a-monolith-%f0%9f%97%bf/">https://licenseware.io/simplifying-architecture-moving-from-microservices-to-a-monolith-%f0%9f%97%bf/</a></p><hr><p><a href="https://medium.com/licenseware/simplifying-architecture-moving-from-microservices-to-a-monolith-6f8ad82af508">Simplifying Architecture: Moving from Microservices to a Monolith 🗿</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From Spreadsheets to Scripts (and then Licenseware) ]]></title>
            <link>https://medium.com/licenseware/from-spreadsheets-to-scripts-and-then-licenseware-6d0fcfd233d8?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/6d0fcfd233d8</guid>
            <category><![CDATA[scripting]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[software-licensing]]></category>
            <category><![CDATA[excel]]></category>
            <category><![CDATA[spreadsheets]]></category>
            <dc:creator><![CDATA[Licenseware]]></dc:creator>
            <pubDate>Thu, 21 Sep 2023 09:23:13 GMT</pubDate>
            <atom:updated>2023-09-21T09:23:13.076Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PYNX-DM7V8niu1o507E6Zg.png" /></figure><p>In my early days as a data analyst, I was engaged in a project for a big corporation struggling to keep track of its vast IT asset inventory. My task was to help them manage, analyze, and make sense of this ocean of data. The tool at hand? Good ol’ Excel… 😬</p><p>The client’s portfolio was vast — ranging from servers to routers, desktop PCs to laptops, and mobile devices to software licenses. It was my responsibility to track their lifecycle, predict future needs, and provide remediation and optimization advice — all critical for effective IT Asset Management (ITAM). ⚙️🖥️</p><p>Excel and I were no strangers; we had spent countless hours together, dissecting data for hidden insights. But as I started importing the data, it became immediately clear that this task was going to push Excel — and me — to our limits. 😓</p><p>The dataset was not only colossal but also incredibly complex. It encompassed numerous data points, from procurement dates to warranty expiration, asset status to locations, and so much more. With millions of rows, multiple columns, complex formulas, and nested IF statements, my days became an endless loop of Excel, coffee, and a lot of head-scratching. 🔄☕️</p><p>The work was laborious and time-consuming, but more importantly, it was error-prone. One misplaced formula, a single incorrect range in a VLOOKUP, or an unintended sort could send hours of work into oblivion. It was a delicate, high-stakes balancing act.</p><p>One late night, after the umpteenth cup of coffee and yet another ‘Excel has stopped responding’ notification, I had an epiphany. There had to be a more efficient, less nerve-wracking way to carry out this mammoth task. Excel anxiety is a real thing. 😱📈</p><p>So I began my research and intuitively got sucked into the realm of programming languages used for data analysis. 
With some exploration, I discovered the power of Python and its libraries — Pandas, Numpy, and Matplotlib. I realized that Python could not only handle large datasets efficiently but also allowed for automating repetitive tasks, data cleaning, visualization, and even advanced predictive analysis with relative ease.</p><p>Deciding to dive in, I spent the next few months learning Python and the basics of these libraries. Then, I moved on to the main event — rewriting my Excel methodology into a Python script. 🐍📜</p><p>The transition was challenging, but I was committed to making it work. The script began to take shape, piece by piece. It started performing tasks that previously required manual intervention, such as identifying inconsistencies, merging related datasets, and calculating the lifespan of assets. I could write functions for specific operations, make the code reusable and reduce the risk of errors. 🔄🎯</p><p>The next time I did an internal review for my client, I delivered the analysis with the help of my Python script, and I was able to showcase not only the requested analysis but also dynamic visualizations. With Python’s data visualization libraries, I could create interactive graphs and charts, making the data easier to understand and the insights more tangible.</p><p>Looking back, transitioning from Excel to Python was a game-changer for my data analysis process at the time and for my career later on. This project had transformed from a struggle into a streamlined, efficient, and effective process. I could handle larger datasets, reduce the risk of errors, and automate repetitive tasks. In turn, it freed me up to focus on understanding the data and extracting meaningful insights.🔓🔥</p><p>The intersection of data analysis and IT Asset Management is one that is ripe with potential. It can provide actionable insights for decision-making, budget optimization, lifecycle management, and more. 
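A flavor of what that first script did: a minimal pandas sketch (the column names are illustrative, not the client's actual schema) that joins procurement records to the scanned inventory, computes each asset's age, and flags inconsistencies such as scanned assets with no purchase record.

```python
import pandas as pd

# Illustrative data: a scan inventory and the matching procurement records.
inventory = pd.DataFrame({
    "asset_id": ["A1", "A2", "A3"],
    "type": ["laptop", "server", "laptop"],
})
purchases = pd.DataFrame({
    "asset_id": ["A1", "A2"],
    "purchase_date": pd.to_datetime(["2019-03-01", "2016-07-15"]),
})

# A left join keeps scanned assets that have no purchase record visible --
# exactly the kind of inconsistency worth flagging.
assets = inventory.merge(purchases, on="asset_id", how="left")

# Asset age in years, relative to an illustrative reference date.
assets["age_years"] = (
    (pd.Timestamp("2023-01-01") - assets["purchase_date"]).dt.days / 365.25
)

# Assets present in the scan but missing from procurement.
missing_purchase = assets[assets["purchase_date"].isna()]
```

A few lines like these replace entire sheets of VLOOKUPs, and they can be re-run on the next data drop without manual rework.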
But it is crucial to choose the right tools for the task.</p><p>While Excel remains one of my favorite data analysis tools, and it served me well in the early stages of my career, moving to a more programmatically focused approach opened up new possibilities, which were not only more efficient but also more robust. And although this transition requires an investment in learning, the return, as I can now affirm, is absolutely worthwhile. 🎉</p><p>From this experience, I’ve learned that the power of data analysis lies not only in the data itself but also in the tools we use and how we leverage them. It’s about choosing the right tool for the task and being open to evolving our methods as technology progresses. And that is the key to unlocking the true potential of data analysis. 💙</p><p>But more importantly, my curiosity for programming and for finding a better way of doing things eventually led me to join forces with two other dreamers (<a href="https://www.linkedin.com/in/cipriangrigore1/">Ciprian</a> &amp; <a href="https://www.linkedin.com/in/chrisallen-licenseware/">Chris</a>) and build the tool (or shall I say toolbox 😉) we always wanted to use when we were deep in the trenches as consultants for our clients. 
<a href="https://www.linkedin.com/company/licenseware">Licenseware</a> is all that and more, and we strive to be successful in commoditizing ITAM tooling and making it available for anyone at any scale or maturity level.</p><p>Source: <a href="https://licenseware.io/from-spreadsheets-to-scripts-and-then-licenseware-%f0%9f%9a%80/">https://licenseware.io/from-spreadsheets-to-scripts-and-then-licenseware-%f0%9f%9a%80/</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6d0fcfd233d8" width="1" height="1" alt=""><hr><p><a href="https://medium.com/licenseware/from-spreadsheets-to-scripts-and-then-licenseware-6d0fcfd233d8">From Spreadsheets to Scripts (and then Licenseware) 🚀</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why and how we use MongoDB at Licenseware]]></title>
            <link>https://medium.com/licenseware/why-and-how-we-use-mongodb-at-licenseware-206b5eeb1a?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/206b5eeb1a</guid>
            <category><![CDATA[mongodb]]></category>
            <category><![CDATA[data-engineering]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Licenseware]]></dc:creator>
            <pubDate>Thu, 21 Sep 2023 09:19:42 GMT</pubDate>
            <atom:updated>2023-09-21T09:19:42.038Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ulhwlMbdkFW7sldJ5KifPg.png" /></figure><p>When it comes to selecting the right database layer for your application, MongoDB is undeniably a polarizing technology. It offers an accessible API, commendable performance, and the enticing prospect of eliminating the challenges associated with managing foreign keys, a common pain point in relational database systems. However, it does arrive with a substantial list of caveats and pitfalls if not wielded correctly, potentially resulting in a subpar developer experience and the costly ordeal of refactoring when transitioning to an RDBMS.</p><p>Our decision to adopt MongoDB was, at its core, similar to the path taken by many startups. We were in need of a robust database layer but had an aversion to the incessant schema changes that often accompany such systems. MongoDB provided a solution to this dilemma. While some may argue that ITAM data inherently exhibits a high degree of relationality, we discovered an innovative approach — storing related data within a single document, replete with nested fields. This allowed us to enjoy the best of both worlds.<br>Another compelling rationale for embracing a document-based database was our startup’s perpetual quest for agile data models, particularly in the realm of reporting.</p><p>Let’s delve into what we’ve found beneficial and the lessons we’ve gleaned from our MongoDB journey:</p><h3>1. Deliberate Updates</h3><p>Initially, we had high hopes of updating thousands of records simultaneously by relying on compound indices, like device and database name. However, this approach proved counterproductive, leading to painfully sluggish processing. We’ve since adopted two strategies, depending on the application. For some, we exclusively perform inserts and then filter the latest records using window functions, thereby preserving historical changes. 
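To illustrate the first strategy, a latest-record filter can be written with MongoDB's $setWindowFields stage (available since MongoDB 5.0). A minimal sketch, with field names (device, db_name, updated_at) that are illustrative rather than our actual schema:

```python
# Aggregation pipeline that keeps only the newest document per
# (device, database-name) pair while older inserts remain stored,
# preserving history. Run with pymongo as:
#   collection.aggregate(latest_records_pipeline)
latest_records_pipeline = [
    {
        "$setWindowFields": {
            # One window per device + database-name combination.
            "partitionBy": {"device": "$device", "db_name": "$db_name"},
            # Newest document first within each partition.
            "sortBy": {"updated_at": -1},
            "output": {"row_number": {"$documentNumber": {}}},
        }
    },
    # Keep only the most recent document in each partition.
    {"$match": {"row_number": 1}},
]
```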
For very large datasets, we opt for a delete-and-insert approach, significantly enhancing processing speed without overly complicating our application code.</p><h3>2. Magnificent Aggregation Pipelines</h3><p>Aggregation pipelines have emerged as our go-to tool for constructing report components. The range of possibilities is staggering, from straightforward grouping and filtering to intricate map-reduce operations. What sets MongoDB apart is the elegance of the code — a clean JSON document with a syntax that can only be described as beautiful. The Mongo Compass UI tool further simplifies the process, enabling the definition of each stage individually and offering real-time data transformation visualization. As a long-time SQL user, I find MongoDB aggregation pipelines easier to write and maintain. Notably, instead of storing raw SQL code in unwieldy strings, our queries are structured as Python dictionaries, facilitating syntax checking and direct referencing of variables and functions within the query.</p><h3>3. Navigating Document Size and Aggregation Stage Limits</h3><p>MongoDB imposes constraints on document size and aggregation stages, necessitating a methodical approach to data extraction and intelligent data modeling. Consider the $unwind aggregation stage, a valuable tool for dealing with nested data. However, when handling arrays with thousands of records, MongoDB promptly reminds us to reassess our grain level or reevaluate the necessity of retrieving the entire dataset for our query. The 16 MB document size limit, while seemingly sufficient, can be limiting, especially when the prevailing instinct is to consolidate everything into a single document to avoid joins. We’ve tackled these limitations through data model modifications, storing one document for each entity that would typically be nested, or exploring alternatives like DuckDB for storing and querying raw data. 
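For a sense of how quickly the grain-level question surfaces, here is a hypothetical $unwind pipeline over a devices collection in which each document embeds a databases array; with thousands of array elements per document, the output explodes accordingly:

```python
# Hypothetical pipeline: flatten each device's embedded `databases`
# array into one output document per (device, database) pair.
# Field names are illustrative, not our actual schema.
unwind_pipeline = [
    # One output document per element of the `databases` array.
    {"$unwind": "$databases"},
    # Project a flat shape for reporting.
    {
        "$project": {
            "_id": 0,
            "device": "$device_name",
            "database": "$databases.name",
            "edition": "$databases.edition",
        }
    },
]
```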
The key takeaway is to reserve our main MongoDB collections for data we genuinely require and routinely query.</p><h3>4. Data Duplication Over Joins</h3><p>Departing from the conventional wisdom of keeping data lean and normalized, we’ve embraced data duplication within MongoDB. While in the SQL world, a change from “Active” to “Enabled” in a “Statuses” table would require a single record update, in MongoDB, this necessitates modifying every record where the status equals “Active” to “Enabled.” It may seem cumbersome, but it aligns with our database optimization strategy geared toward expeditious read operations.</p><h3>5. Adapting to Loose Schema Validation</h3><p>MongoDB operates in a realm where strict schema validation takes a backseat. Here, you have the flexibility to define and evolve your data structures with a degree of freedom that might be unfamiliar to those accustomed to rigid relational databases. This liberty has its pros and cons.<br>Rather than relying on an Object-Relational Mapping (ORM) tool to enforce schema conformity, we’ve chosen to take control of data validation within our application code. It’s a conscious choice, one that places the responsibility squarely on our shoulders — and those of our developers — to ensure that data models align with our expectations.</p><p>To streamline this process, we use libraries like Marshmallow and Pydantic. These Python libraries make defining and validating data models a breeze. With them, we can define the structure of our data, set constraints, and validate incoming data before it ever touches the database. This approach ensures data integrity while affording us the flexibility to adapt our schemas as needed.</p><h3>6. Prudent Use of Database Migrations</h3><p>Database migrations are a familiar concept in the world of relational databases, where changes to the database schema necessitate careful planning and execution. 
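In our MongoDB setup we repurpose the same idea to transform the data rather than the schema. A minimal, hypothetical sketch, written over plain dicts for brevity (with pymongo the equivalent would be a single update_many call; the status rename mirrors the example from point 4):

```python
def migrate_status(documents, old="Active", new="Enabled"):
    """Hypothetical data-oriented migration step: rewrite every document
    whose status still uses the old value. With pymongo this would be
    collection.update_many({"status": old}, {"$set": {"status": new}})."""
    migrated = 0
    for doc in documents:
        if doc.get("status") == old:
            doc["status"] = new
            migrated += 1
    return migrated

# Illustrative documents standing in for a collection.
docs = [{"_id": 1, "status": "Active"}, {"_id": 2, "status": "Retired"}]
count = migrate_status(docs)  # only the first document is touched
```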
While MongoDB takes a different approach to schema management, we’ve found a unique use case for database migrations in our MongoDB ecosystem. Rather than employing migrations to tweak database schemas, as is the norm in traditional databases, we’ve repurposed this tool to orchestrate changes within the data itself. This unconventional approach has proven valuable in several scenarios.<br>For instance, when a fundamental change in data structure is required, we turn to database migrations. These migrations serve as a mechanism to update and transform existing data to align with the new schema. It’s a way to ensure a smooth transition without compromising data integrity or causing data loss.<br>Additionally, database migrations become indispensable when we need to apply specific data transformations or updates across a large dataset. Whether it’s adjusting data formats, recalculating values, or reorganizing documents, migrations provide a structured and controlled means to enact these changes.</p><p>By adapting these unconventional practices to suit our MongoDB setup, we’ve found innovative ways to maintain data integrity and agility within our database, aligning it with the unique demands of our application.</p><p>In the realm of databases, the choice is laden with trade-offs, contingent on the specific application’s purpose and data characteristics. MongoDB may not be the ideal fit for an ERP system where changes in one entity have widespread ripple effects or for a system demanding stringent schema control. Our affinity for MongoDB is deeply intertwined with the nature of our application — data processing in all its complexity. Our data processors continuously adapt to evolving customer requirements, and our system’s prowess lies in our nimbleness. Consequently, duplicating the same attribute across thousands of records is a minor trade-off when it translates into report components loading in the blink of an eye. 
MongoDB may evoke mixed sentiments, but within the context of our operation, it’s the secret ingredient that elevates our application’s performance.</p><p>Source: <a href="https://licenseware.io/why-and-how-we-use-mongodb-at-licenseware/">https://licenseware.io/why-and-how-we-use-mongodb-at-licenseware/</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=206b5eeb1a" width="1" height="1" alt=""><hr><p><a href="https://medium.com/licenseware/why-and-how-we-use-mongodb-at-licenseware-206b5eeb1a">Why and how we use MongoDB at Licenseware</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Monitoring at Licenseware: The Power of Slack Channels and Real-Time Collaboration]]></title>
            <link>https://medium.com/licenseware/monitoring-at-licenseware-the-power-of-slack-channels-and-real-time-collaboration-f474183671a9?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/f474183671a9</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[monitoring]]></category>
            <category><![CDATA[logs]]></category>
            <category><![CDATA[slack]]></category>
            <dc:creator><![CDATA[Licenseware]]></dc:creator>
            <pubDate>Thu, 21 Sep 2023 09:16:30 GMT</pubDate>
            <atom:updated>2023-09-21T09:16:30.488Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3049qR1XrRbtzBk5EfcOig.png" /></figure><p>In the fast-paced world of Software-as-a-Service (SaaS), monitoring your systems is not just an afterthought; it’s a necessity. At Licenseware, our mission is to commoditize IT asset management (ITAM) by making top-notch tooling available for companies of all scales and budgets. But how do we keep tabs on our systems to make sure we are providing the best possible service to our clients? We do it with a blend of conventional monitoring tools and innovative, real-time collaboration.</p><p>While we use industry-standard tools like Grafana and OpenTelemetry, one game-changer has been incorporating Slack channels to forward logs from our systems. This real-time integration has not only streamlined our monitoring processes but also enhanced our team’s efficiency.</p><h3>Why Slack?</h3><p>You might wonder, “Why Slack? Aren’t there dedicated tools designed explicitly for monitoring?” While it’s true that there are specialized tools for this purpose, what sets Slack apart for us is its ease of integration and the ability to foster real-time collaboration.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/569/0*Ws_1kfAXcLd4bTyQ.jpeg" /></figure><h3>Real-Time Discussion and Troubleshooting</h3><p>By having logs forwarded to specific Slack channels, our team can immediately see what’s happening and start discussing potential issues. This real-time troubleshooting ability significantly reduces the time it takes to identify and solve problems.</p><h3>Clear Archive of Actions and Decisions</h3><p>Whenever there’s a discussion about a log or an alert, it leaves a clear archive in the Slack channel. This archived data can be incredibly useful later for Root Cause Analysis (RCA) or generating bug tickets. 
It ensures that there’s an auditable trail of what was identified and what actions were taken.</p><h3>Ease of Use and Accessibility</h3><p>The vast majority of tech teams are already using Slack for internal communication. By bringing monitoring into the same platform that everyone is already comfortable with, we simplify the workflow, making it easier for everyone to get involved when necessary.</p><h3>Open Source for the Community</h3><p>We believe in sharing what we’ve learned and making it easier for other teams to adopt effective practices. To that end, we have made our Slack monitoring package open source. It’s a simple, effective way to replicate our Slack-based monitoring approach in your environment, irrespective of the scale or complexity.</p><h3>How to Get Started</h3><p>Interested in implementing this in your organization? You can find the open-source package on our <a href="https://github.com/licenseware/licenseware-logblocks">GitHub repository</a>. It comes with clear instructions on how to set it up, but if you have any questions, feel free to reach out.</p><h3>Conclusion</h3><p>Monitoring is critical in any SaaS business, but it doesn’t have to be cumbersome or isolated. By incorporating real-time collaboration tools like Slack into your monitoring strategy, you can significantly improve both the speed and efficiency of your troubleshooting efforts. 
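For a taste of how little code the core mechanism needs, here is a minimal, hypothetical sketch of forwarding a log line to a Slack channel through an incoming webhook (the URL is a placeholder; our production forwarder is the open-source package linked above):

```python
import json
import urllib.request

# Placeholder -- substitute your own Slack incoming-webhook URL.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_payload(service: str, level: str, message: str) -> dict:
    """Format a log line the way Slack incoming webhooks expect."""
    return {"text": f"[{level}] {service}: {message}"}

def forward_log(service: str, level: str, message: str) -> None:
    """POST one log line to the channel behind the webhook."""
    data = json.dumps(build_payload(service, level, message)).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)  # fire-and-forget network call
```

In practice you would batch or rate-limit these calls so a noisy service cannot flood the channel, but the shape of the integration stays this simple.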
At Licenseware, it has made our lives easier, and we believe it can do the same for you.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f474183671a9" width="1" height="1" alt=""><hr><p><a href="https://medium.com/licenseware/monitoring-at-licenseware-the-power-of-slack-channels-and-real-time-collaboration-f474183671a9">Monitoring at Licenseware: The Power of Slack Channels and Real-Time Collaboration</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What is a software license audit?]]></title>
            <link>https://medium.com/licenseware/what-is-a-software-license-audit-1e1d97de781f?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/1e1d97de781f</guid>
            <category><![CDATA[software-licensing]]></category>
            <category><![CDATA[software-audit]]></category>
            <category><![CDATA[it-asset-management]]></category>
            <category><![CDATA[software]]></category>
            <dc:creator><![CDATA[Licenseware]]></dc:creator>
            <pubDate>Mon, 23 Jan 2023 23:32:31 GMT</pubDate>
            <atom:updated>2023-01-23T23:32:31.302Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cMssYHmJEEINa4_RWIJpjA.png" /></figure><p>A software license audit is a process by which an organization reviews and evaluates its software usage to ensure compliance with the terms and conditions of the software licenses it has acquired. This can include identifying which software applications are installed, determining how many licenses have been purchased and are in use, and identifying any potential overuse or underuse of licenses.</p><p>One of the main risks associated with software license audits is the potential for organizations to be non-compliant with the terms and conditions of their software licenses. This can lead to legal and financial consequences, such as fines and penalties, as well as the potential for vendor-initiated audits that can be costly and time-consuming.</p><p>Another risk is the potential for organizations to unknowingly use software that is not licensed, or to use software in a way that is not covered by their current license. This can lead to unexpected costs and legal issues.</p><h3>But… why? (i.e. What’s the reasoning behind it?)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-B7Dqz6_Zw6JQKX5hJId6w.png" /></figure><p>The reasoning behind software license audits is to ensure that organizations are using software in compliance with the terms and conditions of their licenses. 
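The over- or under-use check at the heart of an audit is, at its core, simple arithmetic. A hypothetical sketch comparing installations against entitlements per product (the product names and counts are illustrative):

```python
# Illustrative license position: installed instances vs. purchased entitlements.
installed = {"DB Enterprise": 120, "Office Suite": 480, "CAD Pro": 12}
entitled = {"DB Enterprise": 100, "Office Suite": 500, "CAD Pro": 12}

def license_position(installed: dict, entitled: dict) -> dict:
    """Per-product gap: positive = unlicensed use (audit risk),
    negative = shelfware (wasted spend), zero = fully compliant."""
    products = set(installed) | set(entitled)
    return {p: installed.get(p, 0) - entitled.get(p, 0) for p in products}

position = license_position(installed, entitled)
# position["DB Enterprise"] is 20: twenty installs over entitlement.
```

Real license metrics (per-core, per-user, virtualization rights) are far more intricate, but every audit ultimately reduces to a comparison of this shape.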
Software vendors rely on the revenue generated from licensing their software to sustain their business, and therefore, it is essential that they protect their intellectual property and ensure that their customers are using the software in accordance with the terms of the license agreement.</p><p>However, the relationship between software vendors and users has become increasingly fraught over the years, with both sides expressing mistrust and dissatisfaction with each other.</p><p>On one hand, end-user companies have complained about aggressive licensing practices and audits by vendors, which they believe are designed to extract more revenue from them. This can include unexpected costs for additional licenses, audits that are costly and time-consuming, and penalties for non-compliance.</p><p>On the other hand, software vendors have expressed frustration with the high rate of non-compliance among their customers. They believe that many users are using software without proper licenses, or using the software in ways that are not covered by their licenses. This can lead to significant losses in revenue for the vendors, and it can also undermine the value of their intellectual property.</p><p>Both users and vendors have lost trust in each other because of these issues. Users feel like they are being taken advantage of, while vendors feel like they are not being compensated fairly for their products.</p><h3>What should you do?</h3><p>To mitigate this situation and minimize risk, organizations should focus on effective software asset management (SAM) and regular internal license audits. This can help organizations identify and resolve compliance issues, and it can also help vendors ensure that their customers are using their software in accordance with the terms of the license agreement. 
Additionally, vendors should communicate more clearly about license terms, and avoid aggressive tactics and penalties for non-compliance.</p><p>The good news is that the industry has evolved tremendously in the past 10 years, and companies have plenty of choice when it comes to <a href="https://licenseware.io/">SAM tools</a> or SAM services.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1e1d97de781f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/licenseware/what-is-a-software-license-audit-1e1d97de781f">What is a software license audit?</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What is the AI Act, and why should you care about it?]]></title>
            <link>https://medium.com/licenseware/what-is-the-ai-act-and-why-should-you-care-about-it-dc6ca237bf2f?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/dc6ca237bf2f</guid>
            <category><![CDATA[eu-ai-act]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[ai-engineer]]></category>
            <category><![CDATA[ai-ethics]]></category>
            <category><![CDATA[ai-education]]></category>
            <dc:creator><![CDATA[Licenseware]]></dc:creator>
            <pubDate>Wed, 28 Dec 2022 09:03:42 GMT</pubDate>
            <atom:updated>2022-12-28T09:03:42.219Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oHfpZMqeSd3k68galw7chg.png" /></figure><p>Last week I was invited, along with several other Romanian startups, to the EU Parliament to hear about the new compliance regulations for high-risk AI, along with other topics like the regulatory sandboxes (more on that in this post), the Data Act and other ways in which the EU is addressing the fast-paced technology innovations affecting all our lives.</p><p>When people think of AI, there are generally two opposite poles, those who imagine a dystopic future where machines have taken over and those who envision human-like robotic butlers and self-driving hoverboards. And while our phones are still struggling to distinguish between a cat and a dog, the world of AI is big enough to encompass both the dangers of the first and the opportunity for the latter.</p><h3>Why do we need regulations?</h3><p>As any data scientist will tell you, your model is only as good as your training data. We live in a world full of biases, and it’s hard to explain to a machine that although historically, humanity has been horrible at things like respecting people’s rights, we don’t want to propagate that behavior.</p><p>Using raw, unbalanced, but “accurate” data on a model used for profiling criminals would be the tech equivalent of saying, “stereotypes are stereotypes for a reason, so I’ll just judge you based on them”.</p><p>The world’s data is biased because humanity has a very dark past. Just like in the sci-fi movies where the hero must convince the aliens or the death robots that humans are more than just their history, we also need to train our models to understand that just because something happened in the past, it may not always be the correct answer for the future. Suppose we, as empathic human beings, sometimes have difficulty identifying when our judgment is biased. 
How can we expect an algorithm to perform better if it only relies on historical data?</p><p>And that’s why we need regulations: the world is biased, and our models will be too. And it’s up to regulating authorities to ensure that one person is not judged by the actions of a broader, unbalanced, and unregulated group.</p><p>Furthermore, AI applications are woven into our everyday life, from self-driving cars and financial applications to intelligent refrigerators. Their impact is often felt as an improvement, but some applications already present real dangers to both their users and the people around them.</p><h3>The AI Act</h3><p>Although it hasn’t yet been released in its final form, the AI Act distinguishes between high- and low-risk applications and plans to introduce compliance rules for those classified as high-risk. Think electric cars, law enforcement profiling, and medical applications, to name a few.</p><p>Once released, companies selling AI-driven products categorized as high-risk will need to comply with EU regulations. The regulations and certification process are not yet published; however, here are some of the already known objectives:<br>– ensure that AI systems are safe and respect existing laws on fundamental rights and Union values<br>– data protection, consumer protection, non-discrimination, and gender equality are respected<br>– systems used as safety components of various products are thoroughly tested<br>– the regulatory measures will create a level playing field where small companies can innovate and compete with large corporations<br>– regulated but open access to data for all developers<br>– the compliance rules are only aimed at applications classified as high risk</p><p>Once the regulations are enacted, existing products will have two years to comply.</p><p>A critical aspect of the AI Act is the regulatory sandboxes. 
Systems put in place at the national level, based on the EU’s specifications, will provide developers with the required datasets and tools for self-certification. It’s not yet clear what the sandboxes will look like. Because they will be implemented by each member country individually, there is a chance we will see a lot of variation in both their requirements and data, not to mention the speed with which these will be made available. We can also expect to see private companies building and providing sandboxes as a service, which would be beneficial from a speed perspective. However, the cost of accessing such a service may become a barrier for small developers.</p><p>In conclusion, the AI Act is a welcome piece of legislation that will bring some order to an environment that has already claimed victims, and which will hopefully not hinder innovation but rather ensure that technology is used to improve people’s lives.</p><p>Source: <a href="https://licenseware.io/what-is-the-ai-act-and-why-should-you-care-about-it/">https://licenseware.io/what-is-the-ai-act-and-why-should-you-care-about-it/</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dc6ca237bf2f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/licenseware/what-is-the-ai-act-and-why-should-you-care-about-it-dc6ca237bf2f">What is the AI Act, and why should you care about it?</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The 10x ITAM Manager]]></title>
            <link>https://medium.com/licenseware/in-the-software-engineering-industry-theres-a-myth-that-talks-about-a-special-kind-of-engineer-b6c500317fe9?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/b6c500317fe9</guid>
            <category><![CDATA[it-asset-management]]></category>
            <category><![CDATA[10x-engineer]]></category>
            <dc:creator><![CDATA[Licenseware]]></dc:creator>
            <pubDate>Fri, 04 Nov 2022 12:01:22 GMT</pubDate>
            <atom:updated>2022-12-28T08:52:50.661Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SxJoy5M81VJ1uG5KWMDjuw.png" /></figure><p>In the software engineering industry, there’s a myth about a special kind of engineer. “The 10x Engineer” is said to be ten times more effective than the average engineer.</p><p>There’s <a href="https://medium.com/ingeniouslysimple/the-origins-of-the-10x-developer-2e0177ecef60">a bit of history to it</a>. Since 1977, Tom DeMarco and Tim Lister have conducted a public productivity survey called “Coding War Games”. Teams of software development professionals from different organizations compete to complete a series of benchmarks in minimal time with minimal defects. They’ve had over 600 developers participate.</p><p>What they found was a significant disparity between individuals on the same team. Some members could barely carry their weight, while others were carrying the team on their shoulders.</p><p>For as long as we’ve been in software, there’s been talk of The 10x Engineer. These are the people you want to solve your problems.</p><p>Since we’ve been dwelling in the software engineering world almost as much as in the ITAM world, we sometimes borrow exciting concepts. We like to adapt these concepts to what we do in a meaningful way.</p><h3>So why do we think the 10x concept is relevant in ITAM?</h3><p>Several challenges make the 10x concept relevant and a practical mindset for any ITAM pro.</p><h3>1. Good talent is costly and scarce</h3><p>One compelling observation is that senior ITAM Managers (or License Consultants) charge more per hour than software engineers. For perspective, software engineers are not exactly known for their low rates.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/699/0*9u9v9Nq4f6c9RuKY.png" /></figure><h3>2. 
Premium tools are pricey and require high commitment</h3><p>Premium ITAM tools are notorious for their high price tags and lengthy implementation cycles, which strain internal resources. While software is consumed as a commodity, most ITAM tools are not.</p><p>Many failed ITAM programs have one of these two essential ingredients compromised: either the right talent is missing, the tooling falls short, or both.</p><h3>3. ITAM 1.0 &gt;&gt;&gt; ITAM 2.0</h3><p>In recent years, a transition has been happening. Technology is finally catching up with ITAM, and we see the industry moving into a new era. As new possibilities arise, a new set of updated abilities is needed to take full advantage of the opportunity. And, more importantly, a new kind of mindset.</p><p>We like to call it <strong>ITAM 2.0</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/942/0*J2nwC-KrIlHGha9j.png" /></figure><h3>What would the 10x ITAM Manager look like?</h3><p>Now that we’ve established that a concept like this is practical, it is worth thinking about what would make an ITAM Manager 10x.</p><p>Throughout our time in ITAM, and based on the interviews we had with fellow asset managers, four main attributes keep coming up (we’ll call them sides):</p><ul><li>inquisitive side</li><li>technical side</li><li>governance side</li><li>political side</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*rrUm7IdYXZzm-eRE.png" /></figure><h3>The inquisitive side</h3><p>ITAM pros are inquisitive by nature. Being inquisitive keeps them passionate and engaged with the subject matter. They know things like the history of licensing and why it’s important to society, because they are genuinely curious and like their job.</p><p>They understand how vendors can influence the market and the logic behind licensing rules. 
They have a solid holistic understanding, which helps them quickly pick up specific scenarios and address them effectively.</p><h3>The governance side</h3><p>Good IT leaders understand the problem and can frame it according to the situation, whether it’s a low-level discussion with the technical staff or a C-level conversation. They can design efficient processes and policies and implement and iterate on them in an agile fashion.</p><h3>The technical side</h3><p>While many ITAM programs have traditionally been carried out and organized in spreadsheets, this is no longer the status quo. The next-generation ITAM stack requires data skills beyond Excel. Understanding data engineering, ETL pipelines, and how to leverage data from various systems in novel ways is a huge plus and a significant asset to the organization.</p><p>Next-gen ITAM pros deeply understand IT and its role in the organization. They know that automation is a friend, not a replacement. They leverage automation in ways that complement their abilities, making computers do the tedious analysis work and allowing them to focus on more complex or abstract tasks that machines can’t automate yet.</p><h3>The political side</h3><p>Everyone who’s been in the industry long enough knows that you need to exercise your political muscle to get things done. Great IT leaders have strong professional networks. They are constantly surrounded by specialists and knowledgeable people who can help them with advice or connections.</p><p>Influential leaders know how to talk the talk, but also walk the walk. We are as good as our word in this industry, and we’d better make things happen. The better we are at effectively describing the problems and opportunities and how they directly affect different stakeholders, the higher the chance of getting their support. 
But to lock in that trust and create those relationships, we have to deliver on our promises, be it a successful process, a cost optimization, or an end-to-end SAM program.</p><h3>How do we make the most of 10x ITAM Managers?</h3><p>From a leadership point of view, it’s essential to identify the skill gaps and blockers in the team, and to train periodically.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Od_kRuXSj3OqnBLm.png" /></figure><p>Recognize what kind of staff you have, recognize the difference between a specialist and a generalist, and know when to leverage them individually or together.</p><p>A generalist might be a good starting point for a new team, while a specialist is ideal for a vendor audit defence case. Learn the difference and leverage them in their element.</p><p>Give them the right level of autonomy and accountability, and let them explore new ways of solving problems and try new tech.</p><p>To set them up for success, you must develop them to be future leaders. Mentor them until they can mentor others.</p><h3>Stay relevant</h3><p>Looking toward the future, we see all these buzzwords cropping up: blockchain, FinOps, IoT (becoming mainstream), container licensing, and autonomous devices. Today we’re talking about the transition to ITAM 2.0. 
Not so long from now, it will be 3.0.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1014/0*C6ygIB3SWLzzBWCX.png" /></figure><p>Above all, the 10x ITAM Manager is a mindset that enables you and your organization to remain relevant in the fast-paced world of IT.</p><p>Stay relevant, folks.</p><p>Source: <a href="https://licenseware.io/the-10x-itam-manager/">https://licenseware.io/the-10x-itam-manager/</a></p><hr><p><a href="https://medium.com/licenseware/in-the-software-engineering-industry-theres-a-myth-that-talks-about-a-special-kind-of-engineer-b6c500317fe9">The 10x ITAM Manager</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Stop Committing Configurations to your Source Code]]></title>
            <link>https://medium.com/licenseware/stop-committing-configurations-to-your-source-code-fb37be351492?source=rss----7cb685824622---4</link>
            <guid isPermaLink="false">https://medium.com/p/fb37be351492</guid>
            <category><![CDATA[python]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Meysam]]></dc:creator>
            <pubDate>Tue, 29 Mar 2022 14:30:53 GMT</pubDate>
            <atom:updated>2022-03-29T14:30:53.548Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mZnX66p5DvsHbzrnMeKh1w.png" /></figure><h3>Intro</h3><p>Over the last decade or so, thanks to technological advancements in operations and tools such as CI/CD, containers, IaaS, etc., more and more people (or software engineers, we should say) are familiar with the operations part of the business (yay!🥳).</p><p>This has made it easier and easier to talk to developer teams about things such as configuration and the way they employ it in their code (take a look at <a href="https://12factor.net/config">the 12-factor app</a> if you don’t know what I’m talking about).</p><p>Though there is still a long journey ahead of us, we have come a long way.</p><p>In this article, I’m going to lay out a common problem, namely configuration, and propose a proper way to address it.</p><h3>So, what is it anyway? And who cares?</h3><p>When it comes to configuration, every developer has their own preference on how to read values that might change between deployments/environments.</p><p>Some might use the .env file; I know for a fact that the JavaScript guys are more interested in having .env.ENV_NAME in their source code.</p><p>Other people would go for other names; I’ve seen .envlocal, .env.debug, and .env-test, to mention a few.</p><p>What’s the worst part about all of this? It’s that it’s wrong to the bone. The configuration of an app should never be committed to the source code. 
To quote the 12-factor guys:</p><blockquote>A litmus test for whether an app has all config correctly factored out of the code is whether the codebase could be made open source at any moment, without compromising any credentials.</blockquote><p>The dev guys should only provide some examples of the required and optional configuration that their app needs; a file similar to this would be nice: .env-example.</p><p>Ideally, the content of such a file would be something like this:</p><pre>ENVIRONMENT=CHOOSE_FROM_TEST_DEV_PROD<br>REDIS_URI=CHANGE_THIS<br>MONGO_URI=CHANGE_THIS</pre><p>And so on.</p><figure><img alt="Woman working at the office and staring at her computer" src="https://cdn-images-1.medium.com/max/1024/1*904Fxn2XJtBrit17viAjAg.jpeg" /><figcaption>Photo by <a href="https://www.pexels.com/@thisisengineering?utm_content=attributionCopyText&amp;utm_medium=referral&amp;utm_source=pexels">ThisIsEngineering</a> from <a href="https://www.pexels.com/photo/woman-coding-on-computer-3861958/?utm_content=attributionCopyText&amp;utm_medium=referral&amp;utm_source=pexels">Pexels</a></figcaption></figure><p>You should NEVER, I repeat, NEVER commit the configuration to either the source code or the image of your app (I’m talking about Docker images, but the same applies to any similar concept).</p><p>As a DevOps Engineer, I’m responsible for providing those values (whether required or optional) to your application, and when and where I forget to do so, your app should (or better, must) fail and complain about the missing value; something like this, perhaps, right before runtime:</p><pre>ValidationError: 1 validation error for Config<br>MONGO_URI<br>  field required (type=value_error.missing)</pre><p>Apart from the biggest mistake I’ve seen people make, committing the env file to the source code, I’ve also witnessed people putting the env file into the image of the app; I’m saying this loud and clear so that everyone can hear: IT IS WRONG!</p><p>Never place your env 
file inside the image of your app, because it makes the behavior of the app nondeterministic.</p><p>Any person (with the right amount of access privilege) should be able to run your application with as many instances and as many different configurations as they desire, without the need to tweak some file in your image or do other weird stuff to overwrite, let’s say, a MongoDB URI.</p><p>That means that, with identical source code, I, as an operations guy, should be able to run your application in any or all of the environments I see fit, e.g. testing, staging, development, production, etc.</p><p>Again, let me quote the 12-factor guys:</p><blockquote>A <a href="https://12factor.net/codebase">codebase</a> is transformed into a (non-development) deploy through three stages:</blockquote><blockquote>1. The <em>build stage</em> is a transform which converts a code repo into an executable bundle known as a <em>build</em>. Using a version of the code at a commit specified by the deployment process, the build stage fetches vendors <a href="https://12factor.net/dependencies">dependencies</a> and compiles binaries and assets.</blockquote><blockquote>2. The <em>release stage</em> takes the build produced by the build stage and combines it with the deploy’s current <a href="https://12factor.net/config">config</a>. The resulting <em>release</em> contains both the build and the config and is ready for immediate execution in the execution environment.</blockquote><blockquote>3. 
The <em>run stage</em> (also known as “runtime”) runs the app in the execution environment, by launching some set of the app’s <a href="https://12factor.net/processes">processes</a> against a selected release.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YZk03C2t5XEYMpaRzfORlg.jpeg" /><figcaption>Photo by <a href="https://www.pexels.com/@thisisengineering?utm_content=attributionCopyText&amp;utm_medium=referral&amp;utm_source=pexels">ThisIsEngineering</a> from <a href="https://www.pexels.com/photo/woman-coding-on-computer-3861958/?utm_content=attributionCopyText&amp;utm_medium=referral&amp;utm_source=pexels">Pexels</a></figcaption></figure><p>As a final touch, now that I have outlined the problem clearly, let me provide an opinionated solution.</p><p>As I am a Python engineer, I’m gonna talk about a library that I adore, admire, and support (both spiritually and financially) in the Python ecosystem, but you won’t have trouble finding the equivalent in your language.</p><p>Lo and behold, Pydantic 🥁</p><p>Pydantic is my personal preference when it comes to validation. 
But aside from all the cool features it provides for a robust production application, it also comes with a Settings API which you can employ in your app to <strong>avoid having to read configurations from multiple places</strong> and thereby confusing both yourself and the operations team.</p><p>Before diving right into the code, let us review the exact requirement one more time, just to make sure we understand what we are trying to solve here:</p><blockquote>Any DevOps guy has to be able to run the same app as many times as he/she desires, on the same machine or many, with different sets of configurations (or environment variables).</blockquote><p>So, enough talking; “talk is cheap, show me the code” 😍</p><p><a href="https://medium.com/media/b086702103c877241243aa2cc46d2f2a/href">https://medium.com/media/b086702103c877241243aa2cc46d2f2a/href</a></p><p>The language is Python and the syntax is pretty straightforward, so you won’t have much trouble picking up what is going on; therefore, there is no point in me wasting words and your time on it!</p><p>You can easily run this file to make sure that the promise holds (python3 test_pydantic.py). 
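To give you a feel for it, here is a minimal sketch of my own of what such a settings class can look like (assuming Pydantic v1, where BaseSettings ships in the pydantic package itself; the field names mirror the .env-example above and are illustrative, not necessarily the contents of the embedded file):</p><pre>import os<br><br>try:<br>    from pydantic import BaseSettings  # Pydantic v1<br>except ImportError:<br>    # in Pydantic v2, BaseSettings moved to the pydantic-settings package<br>    from pydantic_settings import BaseSettings<br><br>class Config(BaseSettings):<br>    ENVIRONMENT: str = "TEST"  # optional, falls back to the default when unset<br>    REDIS_URI: str             # required<br>    MONGO_URI: str             # required; if missing, Config() raises the<br>                               # ValidationError shown earlier<br><br># The ops side provides the values at runtime (e.g. docker run -e ...);<br># here we simulate that through the process environment.<br>os.environ["REDIS_URI"] = "redis://localhost:6379"<br>os.environ["MONGO_URI"] = "mongodb://localhost:27017"<br><br>config = Config()<br>print(config.ENVIRONMENT, config.MONGO_URI)</pre><p>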
Using this style for your settings, any kind of operation is possible, with different sets of values provided as the config for your app.</p><p>If you are interested in knowing more, head over to the article below, which summarizes the 12-factor app short and sweet 😁.</p><p><a href="https://medium.com/licenseware/12-factor-app-for-dummies-d905d894d9f8">12-Factor App For Dummies</a></p><p>Also, if you want to read more, here are a few selected and most-read articles from before.</p><ul><li><a href="https://medium.com/geekculture/patch-your-dependencies-like-a-boss-de757367010f">Patch Your Dependencies Like a Boss</a></li><li><a href="https://medium.com/amerandish/clean-architecture-simplified-223f45e1a10">Clean Architecture Simplified</a></li><li><a href="https://medium.com/skilluped/10-tips-on-writing-a-proper-dockerfile-13956ceb435f">10 Tips on Writing a Proper Dockerfile</a></li><li><a href="https://medium.com/skilluped/stop-writing-mediocre-docker-compose-files-26b7b4c9bd14">Stop Writing Mediocre Docker-Compose Files</a></li><li><a href="https://medium.com/skilluped/what-is-iptables-and-how-to-use-it-781818422e52">What Is iptables and How to Use It?</a></li></ul><hr><p><a href="https://medium.com/licenseware/stop-committing-configurations-to-your-source-code-fb37be351492">Stop Committing Configurations to your Source Code</a> was originally published in <a href="https://medium.com/licenseware">Licenseware</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>