<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by anynines on Medium]]></title>
        <description><![CDATA[Stories by anynines on Medium]]></description>
        <link>https://medium.com/@anynines?source=rss-a9189043b9c1------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*ND-WIDlVBBdS7Fywsh9M_g@2x.png</url>
            <title>Stories by anynines on Medium</title>
            <link>https://medium.com/@anynines?source=rss-a9189043b9c1------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 16:11:12 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@anynines/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Enhancing Dev Productivity with a9s CLI for PostgreSQL on Kubernetes]]></title>
            <link>https://anynines.medium.com/enhancing-dev-productivity-with-a9s-cli-for-postgresql-on-kubernetes-843632ede012?source=rss-a9189043b9c1------2</link>
            <guid isPermaLink="false">https://medium.com/p/843632ede012</guid>
            <category><![CDATA[database-management]]></category>
            <category><![CDATA[postgres]]></category>
            <category><![CDATA[postgresql]]></category>
            <category><![CDATA[cloud-automation]]></category>
            <category><![CDATA[kubernetes]]></category>
            <dc:creator><![CDATA[anynines]]></dc:creator>
            <pubDate>Wed, 15 May 2024 06:15:30 GMT</pubDate>
            <atom:updated>2024-05-15T06:15:30.503Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vxZEEUy9_2TG2GISuPsrmQ.png" /></figure><p>The tools development teams use to manage databases in cloud-native environments must keep pace with industry developments while enhancing productivity and the developer experience. <a href="https://anynines.com"><strong>anynines</strong></a>, a prominent European pioneer in cloud-native solutions, has recently launched the a9s CLI, a free command-line interface tool designed to streamline PostgreSQL management in local Kubernetes clusters using Minikube or Kind. This new tool not only simplifies database setup and management but also significantly enhances developer productivity by integrating seamlessly with Kubernetes’ powerful orchestration capabilities. The CLI is free for noncommercial use; a license can be purchased for business usage.</p><p>The <a href="https://github.com/anynines/a9s-cli-v2?utm_source=media&amp;utm_medium=medium&amp;utm_campaign=a9s-cli-git">a9s CLI</a> is designed to enhance community engagement and accessibility by simplifying the process of setting up a local development environment with Postgres. Using Minikube or Kind, the CLI automatically installs the a8s Postgres operator, setting up a clustered Postgres instance with streaming replication and automatic failover for users.</p><h3>Simplifying Setup with a9s CLI</h3><p>The a9s CLI offers a swift and straightforward setup process. By leveraging the <a href="https://github.com/anynines/a9s-cli-v2?utm_source=media&amp;utm_medium=medium&amp;utm_campaign=postgres">a8s PostgreSQL operator</a> specifically designed for Kubernetes, developers can deploy PostgreSQL instances as StatefulSets with just a single command. This eliminates the complexities and potential errors associated with manual configuration, allowing developers to focus more on development and less on setup. The simple installation process is a game-changer for developers who are looking to rapidly deploy and test applications, reducing the time from conception to deployment.</p><h3>Configuration Management Made Easy</h3><p>Managing configurations in a dynamic development environment can be cumbersome and error-prone. The a9s CLI addresses this challenge by automating the lifecycle management of PostgreSQL databases. It ensures that all instances are deployed with the correct configurations and kept updated without manual intervention. This level of automation minimizes the risk of human error while also ensuring consistency across different development and production environments.</p><h3>Updates and Scaling</h3><p>One of the key features of a8s PostgreSQL is its ability to scale PostgreSQL instances up or down and in or out, based on workload demands. This automatic scaling helps maintain optimal performance and efficient resource utilization without any downtime. Furthermore, the a9s CLI simplifies the update process, allowing developers to apply the latest patches and updates to PostgreSQL instances with minimal effort. This ensures that the databases are always running the most secure and stable versions, supporting both the scalability and reliability requirements of modern applications.</p><h3>High Availability and Integrated Monitoring</h3><p>High availability is critical for maintaining the reliability of databases in production environments.
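One concrete way to see this in a local cluster, sketched here assuming a three-Pod instance named clustered-instance (an illustrative name, not one prescribed by the a9s CLI), is to delete the current leader Pod and watch a standby take over:</p><pre># Watch the instance Pods in one terminal
$ kubectl get pods -w

# In a second terminal, delete the current leader Pod;
# a standby is promoted and the deleted Pod is recreated by the operator
$ kubectl delete pod clustered-instance-0</pre><p>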
Patroni enhances the high availability of a8s PostgreSQL by providing built-in automatic failover, effective replication management, and continuous health monitoring, making it a solid choice for managing critical database operations in a distributed environment. It automatically handles failover to a standby server in case the primary server fails, ensuring minimal downtime and maintaining data availability. It manages the configuration of multiple PostgreSQL instances as replicas. This replication process ensures that data is copied across different servers, which helps maintain data integrity and availability even if one server goes down.</p><p>Furthermore, the a9s CLI, through the a8s PostgreSQL operator, utilizes Patroni for streaming replication and automatic failover, together with Kubernetes’ built-in redundancy features. This ensures that PostgreSQL databases are continuously available across different availability zones, providing peace of mind in case of hardware failures or other issues.</p><p>Separately available integration with popular monitoring solutions allows developers to keep a close eye on the health and performance of their PostgreSQL instances. Monitoring tools provide valuable insights that can be used to optimize performance and anticipate potential issues before they become critical. We also offer integration with popular logging solutions to help developers collect and visualize logs from provisioned data service instances.</p><h3>5 Quick Steps: Installing and Using the a9s CLI for Fast PostgreSQL Deployment and Management in Kubernetes</h3><p>Managing PostgreSQL in Kubernetes environments can be complex, but with the a9s CLI and a8s PostgreSQL, it becomes a breeze. Here’s a quick five-step guide to installing and using the a9s CLI to manage PostgreSQL, empowering developers to focus more on their applications and less on database management. Follow the <a href="https://docs.a9s-cli.anynines.com/docs/hands-on-tutorials/hands-on-tutorial-a8s-pg-a9s-cli/">a9s CLI tutorial for an in-depth walkthrough</a>.</p><h3>Step 1: Create a Local Kubernetes Cluster</h3><p>Before you can deploy PostgreSQL, you’ll need a Kubernetes cluster. If you don’t have one already, the CLI can create a local Kubernetes cluster for you using tools like Minikube or Kind. These tools allow you to run a Kubernetes cluster locally on your machine, providing a perfect environment for development and testing. With the a9s CLI, developers don’t need to set up such a cluster manually; the CLI uses these tools to deploy one automatically.</p><h3>Step 2: Install the a8s PostgreSQL Operator</h3><p>Once your Kubernetes cluster is up and running, the next step is to install the a8s PostgreSQL operator. This operator is designed specifically to manage PostgreSQL instances within Kubernetes, providing essential features such as lifecycle management, configuration, and security.</p><h3>Step 3: Deploy a PostgreSQL Cluster</h3><p>With the operator in place, you can now deploy a PostgreSQL cluster. This step involves setting up a cluster with three Kubernetes Pods to ensure high availability. The a8s PostgreSQL operator includes built-in features like asynchronous streaming replication, automatic failure detection, leader election, and automatic failover.</p><h3>Step 4: Launch a Demo Application and Service Bindings</h3><p>To see the a9s CLI and PostgreSQL cluster in action, you can launch a demo application.
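The credentials for such a binding end up in a Kubernetes Secret; the sketch below shows one common way an application Pod consumes such a Secret as environment variables (the Secret name clustered-instance-service-binding and its keys are illustrative assumptions, not names prescribed by a8s PostgreSQL):</p><pre># Illustrative Pod consuming binding credentials from a Secret
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: my-demo-app:latest   # placeholder image
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: clustered-instance-service-binding
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: clustered-instance-service-binding
          key: password</pre><p>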
The ServiceBindings feature of Kubernetes can be utilized to easily create such Kubernetes Secrets, which store database credentials securely and allow your applications to access the database without hardcoding sensitive information.</p><p>This step will deploy a simple application and automatically bind it to your PostgreSQL cluster using Kubernetes Secrets.</p><h3>Step 5: Perform Backups and Restores</h3><p>Finally, maintaining regular backups and having the ability to restore from them is crucial for data integrity. The a9s CLI facilitates easy backup and restore operations, ensuring that your data can be safely stored and recovered in case of any disaster.</p><p>By following these five steps, you can successfully install and utilize the a9s CLI for efficient PostgreSQL management in your local Kubernetes environment. The a9s CLI simplifies the entire process, from installation to backups, allowing developers to deploy robust, scalable PostgreSQL clusters with minimal hassle. With this tool, you’re well-equipped to handle database management tasks efficiently, giving you more time to focus on building your applications.</p><h3>Get Started with a9s CLI</h3><p>The a9s CLI from anynines is a comprehensive solution that addresses several critical aspects of database management in Kubernetes environments and enables easy management of PostgreSQL. By simplifying installation, automating scaling and updates, ensuring high availability, and integrating with monitoring systems, the a9s CLI empowers developers and DevOps teams to enhance their productivity significantly. With this tool, anynines shows its support for the development community and reinforces its commitment to improving operational efficiency and developer experiences in cloud-native architectures.</p><p>For those interested in leveraging this powerful tool, you can <a href="https://github.com/anynines/a9s-cli-v2"><strong>download a9s CLI</strong></a>, view the <a href="https://k8s.anynines.com/for-postgres/clkn/https/docs.a9s-cli.anynines.com/docs/hands-on-tutorials/hands-on-tutorial-a8s-pg-a9s-cli/"><strong>setup guide</strong></a>, or learn more about <a href="https://k8s.anynines.com/for-postgres/"><strong>a8s PostgreSQL</strong></a>. Whether you’re a developer, hobbyist, student, or part of a startup or small organization, the free download of a9s CLI is tailored to help you manage your PostgreSQL databases effortlessly within your Kubernetes clusters. Enterprises can reach out to use a8s PostgreSQL commercially.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=843632ede012" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[MongoDB — What Is It and When Should I Use It? | anynines blog]]></title>
            <link>https://anynines.medium.com/mongodb-what-is-it-and-when-should-i-use-it-anynines-blog-2427621d908d?source=rss-a9189043b9c1------2</link>
            <guid isPermaLink="false">https://medium.com/p/2427621d908d</guid>
            <category><![CDATA[nosql]]></category>
            <category><![CDATA[sql]]></category>
            <category><![CDATA[mongodb]]></category>
            <category><![CDATA[database]]></category>
            <category><![CDATA[cloud-native]]></category>
            <dc:creator><![CDATA[anynines]]></dc:creator>
            <pubDate>Mon, 15 Mar 2021 22:26:00 GMT</pubDate>
            <atom:updated>2021-04-16T10:47:50.569Z</atom:updated>
<content:encoded><![CDATA[<h3>MongoDB — What Is It and When Should I Use It? | anynines blog</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*9TzQyEETg6miWmUF.png" /></figure><p>Have you always wondered what a NoSQL database, and specifically MongoDB, is? Then you have come to the right place. In the following, we will explain the strengths and weaknesses of NoSQL databases. You will learn when it makes sense to use such a database and when it makes sense to stick with a more traditional, well-established database. To round it all off, we have created an example app in Go that takes a JSON string, stores it in a MongoDB database, and reads the created object back from the database.</p><p>In this article, we present NoSQL databases, using MongoDB as a reference. We’ll point out some strengths and weaknesses of NoSQL databases, when it is of interest to use such a database, and when you should stick with a more classical database. We will wrap this article up with a code snippet in Go.</p><p>MongoDB was created in 2007 to work around the shortcomings of existing databases and was released two years later, in 2009.</p><p>If we take a step back and look at the landscape of 2007–2009: it was the birth of cloud computing and Big Data.</p><p>At the time, the Internet was growing exponentially, and internet companies started to get thousands, sometimes hundreds of thousands, of requests per second. Companies required more and more power to run their applications and systems. Unfortunately, the existing systems and software of the time (including databases) were simply unable to keep up with such a significant amount of data and requests — they had not been designed at their inception to handle such load. As a result, whole systems slowed down and sometimes, in the worst case, crashed completely.</p><p>Back to today! MongoDB is a “source-available cross-platform document-oriented database program” (source: <a href="https://en.wikipedia.org/wiki/MongoDB">Wikipedia</a>) [1]. While this sounds like a mouthful, taking the terms one by one makes it easy to understand.</p><ul><li>Source-available: means that the source code can be viewed</li><li>Cross-platform: means that you can run MongoDB on many environments (e.g., Windows, Linux, macOS)</li><li>Document-oriented: is the design approach used by MongoDB to store data — MongoDB stores data in documents instead of tables (as MySQL, for example, does). We will discuss this in more detail later in this article.</li><li>Database: is probably the essential keyword here; MongoDB is a database, meaning that its most important role is to store data. =D</li></ul><p>Now that we have a better understanding of MongoDB’s history and definition, let’s look at what it offers today. At anynines, we believe that there is always a tool best suited for a given job, so let’s try to figure out how it compares to relational databases and when we should prefer one over the other.</p><p>As stated earlier, each record in MongoDB is stored as a document. As stated by the <a href="https://docs.mongodb.com/getting-started/java/documents/">official documentation</a> [2]:</p><p><strong>Documents:</strong><br>A record in a MongoDB collection and the basic unit of data in MongoDB.
Documents are analogous to JSON objects but exist in the database in a more type-rich format known as BSON.</p><p>JSON objects derive from JavaScript; however, many programming environments support converting JSON objects into native mapping types. Documents in MongoDB are BSON, a binary data format like JSON that includes additional type data.</p><p><strong>Collection:</strong><br>A grouping of MongoDB documents. A collection is the equivalent of an RDBMS table. A collection exists within a single database. Collections do not enforce a schema. Documents within a collection can have different fields. Typically, all documents in a collection have a similar or related purpose. See Namespaces.</p><p>If you want to know more about documents and collections, we would suggest taking a look at the official documentation for documents <a href="https://docs.mongodb.com/getting-started/java/documents/">here</a> [2], but the whole text can be simplified with the following diagram:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*fnnJX9nJSXBzHRU5.png" /></figure><blockquote><strong><em>NOTE:</em></strong></blockquote><blockquote><em>A document is similar to a record from a SQL perspective, while a collection is similar to a table.</em></blockquote><h3>MongoDB Features In A Nutshell</h3><ul><li>MongoDB provides high-performance data persistence</li><li>MongoDB supports a rich query language</li><li>MongoDB supports high availability using replica sets</li><li>MongoDB provides horizontal scalability</li><li>MongoDB supports multiple storage engines</li></ul><p>While a traditional SQL database contains fixed and predefined tables to store the data, a NoSQL database can support different types of data structures. This could be a JSON object stored in a document or a simple key-value pair.</p><p>SQL and NoSQL are like light and dark: they are just two sides of the same coin. Just like there would be no night without day, NoSQL would most probably not exist if SQL had not been there before. It is important to remember the relationship between the two, because neither one is worse than the other. They just have different use cases, and you should favor one or the other depending <strong>on your requirements</strong>. But we will discuss that later.</p><p>The two have fundamental differences:</p><table><thead><tr><th>SQL</th><th>NoSQL</th></tr></thead><tbody><tr><td>Relational</td><td>Non-relational</td></tr><tr><td>Structured Query Language (SQL)</td><td>No standard query language</td></tr><tr><td>Predefined schema</td><td>Dynamic schema, unstructured data</td></tr><tr><td>Vertically scalable, <em>which means that we increase the resources of a single instance (e.g. CPU, RAM, …)</em></td><td>Horizontally scalable, <em>which means that we increase the number of instances (e.g. we go from 1 to more instances/servers)</em></td></tr><tr><td>Table-based</td><td>Document, key-value, graph, or wide-column stores</td></tr></tbody></table><p>As we can see from the table, they are the opposite of one another.</p><p>While <strong>it is possible</strong> to use an SQL database to do NoSQL work and vice versa, it should be evident that they will each shine differently depending on the task that is asked of them.</p><p>The following code snippet shows an example application written in Go that takes a JSON string, stores it in a MongoDB database, and reads the created object back from the database.</p><pre>package main

import (
	&quot;context&quot;
	&quot;encoding/json&quot;
	&quot;log&quot;
	&quot;time&quot;

	&quot;go.mongodb.org/mongo-driver/bson&quot;
	&quot;go.mongodb.org/mongo-driver/mongo&quot;
	&quot;go.mongodb.org/mongo-driver/mongo/options&quot;
)

var jsonData = `{
	&quot;first_name&quot;: &quot;John&quot;,
	&quot;last_name&quot;: &quot;Smith&quot;
}`

func main() {
	// Client creation
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(&quot;mongodb://admin:secret@localhost:27017&quot;))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// Parse the given JSON string into a map
	data := make(map[string]interface{})
	err = json.Unmarshal([]byte(jsonData), &amp;data)
	if err != nil {
		log.Fatal(err)
	}

	collection := client.Database(&quot;example&quot;).Collection(&quot;my-collection&quot;)

	// Store the object in the database
	response, err := collection.InsertOne(ctx, data)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf(&quot;Inserted ID: %s&quot;, response.InsertedID)

	// Query the object back from the database
	result := collection.FindOne(ctx, bson.M{
		&quot;_id&quot;: response.InsertedID,
	})
	object := make(map[string]string)
	err = result.Decode(&amp;object)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf(&quot;Queried database object: %v&quot;, object)
}</pre><p>After running the application using `go run main.go`, it should print the following output:</p><pre>2021/02/12 11:34:10 Inserted ID: ObjectID(&quot;602659a2952a3497ed9dccd7&quot;)
2021/02/12 11:34:10 Queried database object: map[_id:602659a2952a3497ed9dccd7 first_name:John last_name:Smith]</pre><p>As you can see from the code snippet, using MongoDB is, from a code perspective, no more challenging than using a conventional database. All modern languages have ORMs (<strong>Object-Relational Mappers</strong>: tools that convert data between incompatible type systems using object-oriented programming languages) that allow developers to quickly translate database entries to language structures or objects — whether it is PostgreSQL, MySQL, or MongoDB.</p><p>Therefore, it is up to the developers to determine how they will use their data; here, we highlighted the strengths of a NoSQL database using MongoDB as an example, and as stated, it is excellent for big data workloads. As developers, it is essential to think from the beginning about how our product will be used and what we expect from it, and to choose our technology stack accordingly. As the saying goes: when you have a hammer, everything looks like a nail.
But sometimes, what you are looking for is a screw and a screwdriver.</p><p>[1] <a href="https://en.wikipedia.org/wiki/MongoDB">https://en.wikipedia.org/wiki/MongoDB</a> <br>[2] <a href="https://docs.mongodb.com/getting-started/java/documents/">https://docs.mongodb.com/getting-started/java/documents/</a></p><p><em>Originally published at </em><a href="https://blog.anynines.com/mongodb-what-is-it-and-when-should-i-use-it/"><em>https://blog.anynines.com</em></a><em> on March 15, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2427621d908d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[High Availability Cloud Data Management — How our Team masters the Data Service Challenge |…]]></title>
            <link>https://anynines.medium.com/high-availability-cloud-data-management-how-our-team-masters-the-data-service-challenge-b78fb8e21d7c?source=rss-a9189043b9c1------2</link>
            <guid isPermaLink="false">https://medium.com/p/b78fb8e21d7c</guid>
            <category><![CDATA[containers]]></category>
            <category><![CDATA[cloud]]></category>
            <category><![CDATA[redis]]></category>
            <category><![CDATA[automation]]></category>
            <category><![CDATA[kubernetes]]></category>
            <dc:creator><![CDATA[anynines]]></dc:creator>
            <pubDate>Fri, 26 Feb 2021 19:37:46 GMT</pubDate>
            <atom:updated>2021-04-19T11:41:09.264Z</atom:updated>
<content:encoded><![CDATA[<h3>High Availability Cloud Data Management — How our Team masters the Data Service Challenge</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*62DB574wYC9LAwdh.png" /></figure><p>Cloud Computing <a href="https://blog.anynines.com/series/evolution-of-software-development-and-operations/">has changed</a> the way applications are being developed and how services are being operated. The a9s Data Services team has always been part of this change by leveraging popular open-source data services and making them consumable on modern Application Developer Platforms, commonly known as Platform-as-a-Service (PaaS).</p><p>However, operating high availability data service clusters <strong>at scale </strong>comes with many challenges. Learn how our team of highly skilled professionals solves those challenges, so that companies can focus on their core business objectives rather than on operational overhead.</p><h3>Cloud Application Platforms &amp; Data Services</h3><p>Modern cloud-native application platforms like <a href="https://blog.anynines.com/multi-cloud-deployments-with-cloud-foundry/">Cloud Foundry</a> allow developers to build, test, and operate distributed, highly available, and complex applications composed of microservices of many different types without the need to cope with the complexity of the underlying infrastructure.</p><p>Such a high degree of automation offered by cloud computing service models, and in particular by PaaS, reduces the operational overhead of the application development process. Thus it is a major driver of the digital transformation of the economy, which happens in a highly dynamic and rapidly changing environment.</p><p>The a9s Data Services team takes these cloud computing paradigms one step further. It applies the core benefits of application platforms, like on-demand self-service, scalability, robustness, ease of use, and full life-cycle automation, to those components that are at the heart of any stateful application: backing services, also known as data services. Such data services encompass relational databases, key-value stores, messaging systems, and so on.</p><h3>The Data Services Challenge: Full life-cycle Automation at Scale</h3><p>While keeping state, or data, available and consistent in (even geographically) distributed cloud environments is a well-understood challenge in distributed systems, <strong>full life-cycle automation </strong>of data services <strong>at scale </strong>is a new challenge of its own.</p><p>Although it might look easy for a DevOps team to launch hundreds of different types of data services on contemporary PaaS, it isn’t easy to guide such a vast number of heterogeneous stateful services through their complete life-cycle. Nor is it easy for DevOps to quickly react to increasing load and critical events that require scale-outs, fast failover, or recovery.</p><p>Moreover, full life-cycle management of data services includes not only day-two operations like backup, restore, logging, and monitoring, but also vertical and horizontal scale-outs and version upgrades.
And “at scale” means being able to cope with the management of thousands or tens of thousands of data service instances that are running on even more virtual machines or containers.</p><p>Last but not least, automation might encompass, for example, the frequent on-demand provisioning of dedicated data service instances by continuous integration systems without any manual intervention by a human operator.</p><p>And this is where the a9s Data Services team comes in with their <a href="https://www.anynines.com/data-services">data service products</a> that are part of the a9s Platform — a modular framework for building application developer platform solutions that provide a high degree of automation to reduce operational friction.</p><h3>A highly skilled team with a clear Mission</h3><p>The mission statement of the a9s Data Services team is to</p><p><em>“Fully automate the entire life-cycle of a wide range of data services to run on cloud-native platforms across infrastructures at scale.”</em></p><p>Their highly automated solutions leverage industry-leading open-source software technologies like <a href="https://blog.anynines.com/how-to-build-a-production-grade-postgresql-cloud-foundry-service/">PostgreSQL</a> for scalable relational databases or high availability messaging and caching systems like <a href="https://blog.anynines.com/redis-what-is-it-and-when-should-i-use-it/">Redis</a> to bootstrap fully automated, production-grade platform environments that are used in production by enterprise customers from the insurance, banking, automotive, and telecommunication sectors.</p><p>The a9s Data Services team helps those organizations build better digital products, master the digital transformation, and remain successful in the long term despite technology cycles becoming shorter and shorter. The team attaches great importance to offering a homogeneous user experience for both platform operators and developers of cloud-native applications. It is committed to delivering rock-solid, proven, and highly reliable services that meet enterprise requirements regarding availability, durability, and integrity.</p><p>To be able to deliver such high-quality products to enterprise customers, the international and cross-functional team is composed of individuals from various fields and levels of experience. All team members live up to high standards and fully embrace lean and agile values with a focus on continuous learning and improvement. They are always keen to share their insights and lessons learned from researching new trends and experimenting with the latest technologies with the community, for example at international <a href="https://blog.anynines.com/cloud-foundry-summit-eu-2020-what-you-missed/">conferences</a> and via various <a href="https://www.youtube.com/channel/UCpHqiz1v7qK7waDT19E3oHQ">media channels</a>.</p><p>As a result, the team’s knowledge and expertise cover a wide spectrum, including an excellent understanding of cloud infrastructures, distributed systems, many different types of data services, and, of course, the automation of all of these.</p><h3>New Challenges ahead</h3><p>By working with the latest cloud technologies and applying modern, transparent, and professional software development processes (pairing sessions, code reviews, test-driven development, continuous integration and deployment, Scrumban, etc.),
the team constantly improves its product portfolio.</p><p>The a9s Data Services team has been a thought leader in production-grade data service automation on virtual machines and especially on Cloud Foundry PaaS. The container trend and the <a href="https://blog.anynines.com/kubernetes-kills-openstack/">rise of Kubernetes</a> as the de-facto standard for container orchestration represent a paradigm shift that no one can escape. Kubernetes is becoming the new infrastructure for cloud-native applications.</p><p>However, such change does not come without challenges, and those challenges create opportunities for new, innovative products. With such new products, the a9s Data Services team wants to change the way data services are operated and automated on Kubernetes. Therefore, we’re looking for platform engineers to help us build the cloud platform of the future. <a href="https://www.anynines.com/career">Learn more.</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6Vsh4gSDQVfxHKxc.jpg" /></figure><p><em>Originally published at </em><a href="https://blog.anynines.com/high-availability-cloud-data-management-how-our-team-masters-the-data-service-challenge/"><em>https://blog.anynines.com</em></a><em> on February 26, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b78fb8e21d7c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[International Migrants Day — Learn How to Join an International Team]]></title>
            <link>https://anynines.medium.com/international-migrants-day-learn-how-to-join-an-international-team-2d26e1f8bd37?source=rss-a9189043b9c1------2</link>
            <guid isPermaLink="false">https://medium.com/p/2d26e1f8bd37</guid>
            <category><![CDATA[migration]]></category>
            <category><![CDATA[anynines]]></category>
            <category><![CDATA[migrants]]></category>
            <dc:creator><![CDATA[anynines]]></dc:creator>
            <pubDate>Fri, 18 Dec 2020 12:36:59 GMT</pubDate>
            <atom:updated>2020-12-18T13:47:02.133Z</atom:updated>
<content:encoded><![CDATA[<h3>International Migrants Day — Learn How to Join an International Team</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/599/0*peH41DQxlAqVM8t7.jpg" /><figcaption><a href="https://www.pexels.com/photo/blur-cartography-close-up-concept-408503/">Photo by slon_dot_pics from Pexels</a></figcaption></figure><p>December 18th is “International Migrants Day” and is celebrated with different activities all around the world.</p><p>At our company, migration usually refers to moving to cloud technologies. But today, we decided to focus on the migration of people.</p><p>Migration is a courageous act: individuals come out of their comfort zones, leave their loved ones and their nation, and move to a new culture and environment.</p><p>Do you want to know why we wrote this article? Click ….</p><h3>Migrants</h3><p>Who are these awesome people?</p><p>People who:</p><ul><li>have worked hard through the first steps of migration</li><li>have packed their whole lives into one or maybe two pieces of baggage</li><li>have said goodbye to a familiar place where they lived for a long time, mostly their whole lives</li><li>have the bravery needed to face cultural challenges</li><li>are flexible enough to adopt the new culture</li></ul><p>And last but not least, people who are swimming against the tide.</p><p>We at anynines really like these people, appreciate them, and are very proud of them.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*JfIg_CHQ0uSPvfIQ.jpg" /><figcaption><a href="https://unsplash.com/photos/7XGtYefMXiQ">Photo by Katie Moum on Unsplash</a></figcaption></figure><h3>Migrants at anynines</h3><p>We decided to celebrate this day by appreciating our colleagues who have started a new chapter of life by joining us at anynines, a Germany-based company with an international environment.</p><p>We asked our team members who have experienced migration to write about themselves and their experiences.</p><blockquote><em>Happy International Migrants Day!</em></blockquote><blockquote><em>Thanks for joining us.</em></blockquote><p>Get to know some of the people behind our products and services:</p><h4>Bukola Johnson</h4><p>Hello, I am Bukola Johnson from Nigeria. I migrated to Germany a few weeks ago, joining the Enterprise Platform Operation (EPO) team of anynines as a DevOps Engineer. I studied computer science at university and have worked in the tech industry ever since. It’s been an amazing journey so far due to the versatile nature of technology and the need to keep innovating. This requires us (techies) to keep learning to stay attuned to innovations, and I find this quite interesting (and fun) because it helps me make a positive impact wherever I find myself, which is my primary reason for being in the tech world.</p><p>Before moving to Germany, I worked in the fintech sector in Nigeria, where we made payments seamless and provided financial access and services for people (especially the underbanked). I worked with several technologies such as Kubernetes and cloud services, including AWS and Google.
As a DevOps Engineer, my primary responsibility is automating deployment processes and helping to improve the reliability, scalability, and availability of products by deploying and managing enterprise solutions (though I am not limited to this primary responsibility).</p><p>After working for several years in Nigeria, my spouse and I decided to try something different in another country, and we discussed and agreed to explore Germany after being impressed by the information and figures we gathered about German tech startups.</p><p>Next, we started researching how to get opportunities in tech companies in Germany, visa and sponsorship requirements, settling down in Germany, and a few other things, though the language difference was a bit of a concern to us. Using LinkedIn and sites (such as Germany startup jobs), I started applying for jobs that fit my skill set.</p><p>Though we received several not-too-encouraging replies from companies preferring local hires or EU citizens😔😔😔, I kept pushing until I saw opportunities with anynines. I quickly read about the organization and what they do, and I was quite keen on the technology (Cloud Foundry😁), as it was an opportunity to enrich my expertise; learning new concepts and ideas is something I look forward to. I then applied to anynines and was super excited when Jessica (HR manager) reached out to me, and I later had a session with Nico and the team for the technical interview. I was shortlisted after the successful interview and assessment, and I was offered the job.</p><p>Yay, I was super elated🤩💪. Applying for a work visa from Nigeria to Germany then took me several months, but throughout the wait, anynines kept in touch with me while my visa was being approved by the Embassy.</p><p>My story would not be complete if I did not mention the part that anynines played in all of this. Jessica’s professionalism and caring nature are among the reasons I am here and excited to join anynines; she showed empathy during the lengthy visa situation, which lasted almost 10 months. After finally receiving my entry visa in February and being scheduled to travel in March, the Covid-19 situation and restrictions struck like a ‘Sledge Hammer’, and my entry visa expired during that period, so I was close to giving up at that point.😑😑</p><p>Finally arriving in Germany on the 2nd of November 2020 was indeed a journey🤩, but in all, I am grateful for the opportunity anynines is giving me.</p><p>Settling down in Germany is another experience entirely, because I could not travel with my spouse, as his family-reunion visa is not completed yet. So it’s been quite challenging for me to be here all by myself, but the anynines staff have been great in helping me through this.</p><p>Jessica helped me a lot with accommodation, and Melanie helped me with moving into my apartment as well as other logistics. The most amazing part is that they helped me stock my apartment with enough groceries to last my quarantine period, even before I arrived 👌😘. They are both amazing.</p><p>Shopping for winter outfits has been with the help of Helena (my wardrobe consultant🤩); she’s been so attentive when I rant about the best winter jacket to keep me warm. Manu, my teammate in the EPO department, has been such a kind person in welcoming me to the team, and Benjamin has been my coach in this new world of Cloud Foundry. Thanks also to Nico for believing in me.
All these people are so wonderful, and my story of migrating to and living in Germany would not be complete without them.</p><p>So far, I love it here, and I do not regret the decision to move to Germany and, most importantly, to join anynines. The cold is dealing with me, though, because Nigeria is always very warm, so this is a different experience for me, but I know that with time and the best winter outfit I will be fine.</p><h4>Francisco Germano</h4><p>Hello, World!</p><p>My name is Francisco Germano, I am 30 years old, and I have been working in software development for 7 years. I currently live in Brazil, where I work remotely for anynines, and it has made me very happy. I hold a degree in Information Systems and an MBA in Project Management. Most of my professional experience has been in backend/frontend development, cloud computing, and distributed systems.</p><p>My first contact with anynines was through a friend of mine. After that, I started to seek more information about the company, and I found pretty much only good things about it. The first impression was great, especially the innovative culture, the stack of technologies, and the company’s good reputation.</p><p>I have been working at anynines for 2 months as part of the Data Services team. Since the beginning, all my expectations of anynines have been fulfilled. Moreover, I have been enjoying working with talented people in a well-structured company.</p><p>I have never been to Germany, but I would really like to go. I am looking forward to visiting Germany and meeting the whole anynines team in person. When this whole Corona situation is resolved, I will do it for sure. I hope to do it as soon as possible.</p><h4>Heitor Meira de Melo</h4><p>Hi there,</p><p>I’m Heitor. I currently live in Brazil but have worked at anynines for almost 4 years now (time flies indeed).</p><p>I met anynines via a recruiting agency who found me on LinkedIn. I had visited Germany before (what an amazing country!), so I was eager to pack my things and move there. I am so glad that Benjamin and Jessica were responsible for helping me establish my new life, and that I could live with Benjamin and his family for a few weeks. I could not have asked for nicer people to share this moment of my life with.</p><p>After I got to meet the other people in the company, things started to feel more like home. anynines felt like more than a company: I could do the job that I loved, in a company that I care about. On top of that, Saarbrücken was not difficult to fall in love with; it has such a beautiful landscape, so much green… and France is just across the border.</p><p>I had to come back to Brazil after almost one year, and the company gave me all the support I needed and decided to keep working with me from here. I still meet all my teammates, my friends, every year, and I am counting the days until the next nice conversations and, of course, Schwenker.</p><h4>Igor Li</h4><p>Hello there!</p><p>My name is Igor. I am from Kyrgyzstan, which is a small country in Central Asia. I am working as a cloud engineer at anynines.</p><p>In 2018 I started to think about relocation. After researching this topic, I chose Germany because it has an excellent environment for raising children, good medical services, and very good and diverse food.</p><p>In 2019 I had my first interview with anynines, and I spoke with one of the engineers. While we were talking, I got a great impression.
His behavior reminded me of a great person I met when I started my career as a software developer. After that, I had a second technical interview, and I liked its flow. I understood how the process of solving tasks looked at a9s, and after that meeting, I said to myself that I would like to work with them. We signed a contract and started the relocation process, and the company helped a lot with that. I want to take my hat off to Jessica Schuster, who did a great job relocating my wife and me.</p><p>Nowadays, I am happy to work at anynines, and I like the feeling that I am not the smartest guy in the room and can still learn something.</p><h4>Marcelo Gonçalves</h4><p>Hello, everyone!</p><p>My name is Marcelo, and although I’m doing an internship in the managed systems department of anynines, I come from and still live in Brazil. I hold a degree in Information Technology, but I first studied Languages and Literature (Portuguese and German). During these studies, I had my first opportunity to visit Germany, where I had unforgettable experiences and made many friends.</p><p>I spent the past few years working as a Portuguese and Literature teacher, studying IT, and visiting Germany whenever I had the chance. After my last “Germany vacation”, however, I realized Germany wasn’t only the place I wanted to visit every once in a while but the place where I wanted to spend the rest of my life. As you can see, I am kind of in the process of “immigrating” twice at the same time: from teaching to IT and from Brazil to Germany.</p><p>During the last semester of my IT studies, I started looking for companies where I could do an internship and came to know about anynines. My first impressions were the best possible: the company works with the most modern technologies on the market; whenever I wrote to them, they didn’t send standard cold messages back but very friendly ones; and in my first job interview, for instance, I spent one hour talking about technical stuff with my interviewer, but it was actually like talking to a friend.</p><p>With the outbreak of the pandemic, when my plans were about to be frustrated, anynines surprised me by allowing me to work remotely, and here I am now. The first good impressions I had have just grown stronger, and if I tried to find an analogy to describe this company, it would be something more or less like this: a happy multicultural family growing up together.</p><h4>Martín Valencia Flores</h4><p>Hello,</p><p>My name is Martín Valencia Flores. I come from México (which might explain why I have two last names). I’ve been living in Germany for a bit over 4 years now, and I am currently a Platform Engineer at anynines.</p><p>I am a Computer Engineer and worked as such (with a bit of tech support) for a small tech company back in my hometown for roughly 3 years, until I decided to uproot myself and try to build a life here in Germany.</p><p>The reason why I chose Germany goes back a bit, almost a decade:</p><p>I had the opportunity to take a semester abroad, in Spain. During winter vacation I went to spend Christmas with a friend of my dad’s colleague (yes, a bit of a stretch in the relationship, but it was a nice experience and I became friends with the whole family) at their home in Memmingen. As soon as I landed, I knew I’d love to live in Germany.</p><p>As usual, life always has other plans, and it took me almost 8 years to accomplish that goal, in the form of studying for a Master’s.</p><p>Why did I apply to anynines?
Well, I was, like many people often find themselves, in the middle of a job hunt. I had a growing interest in Cloud Computing, and I stumbled upon anynines (orange is a color that’s hard to miss), so I applied and hoped for the best. Soon after, I got a reply inviting me to an interview for a position in the Data Services team.</p><p>I was thrilled and nervous.</p><p>From the very beginning, I marveled at the way my colleagues carried themselves; you could see them for the professional people they are, but you could also see that they are, well, people. To me, that was one of the major swaying points, as it pointed to a healthy work environment. This was cemented in the pairing session.</p><p>And so, I’ve been working at anynines for a bit over a year now, and I can say I love working here. Everyone (even the coworkers on the grumpy side) looks out for you one way or another.</p><p>While I won’t say everything has been Cloud Nine (pun intended, and what I thought was the origin of the company’s name until corrected ^^’), the majority of my experiences have been positive.</p><p>Here are some of my happiest moments at anynines:</p><ul><li>The constant moments everyone in the company (our CEO included) took to reassure me that they would help me during the long (and problematic!!!) visa approval process</li><li>The moment Jessica, with a smile on her face, came to my desk and handed me the long-awaited reply from the Ausländerbehörde</li></ul><h4>Sabbir Ahmed</h4><p>Hello everyone.</p><p>I’m Sabbir from Bangladesh, and I moved to Germany one year ago to do my Master’s in Informatics. I joined anynines in March 2020 as a working student. My field of expertise is primarily frontend development for web applications, and that’s what I’m working on right now.</p><p>I chose anynines because the company works with state-of-the-art software technologies to provide reliable and robust products for its customers. It has very experienced developers and experts who can teach me and bring me closer to my goal of becoming a successful Software Engineer.</p><p>My journey with anynines so far has been amazing and very rewarding. My team members are super friendly and cooperative in all aspects. I also love the fact that our office space is very pet-friendly and that the company organizes team events to cheer us up when we feel a bit down.</p><p>I’m extremely grateful to have been a part of this company for the last 10 months, and I am looking forward to more wonderful experiences ahead.</p><p>Thank you.</p><p>Thank you for joining us.</p><blockquote><em>Happy International Migrants Day! We are proud of you.</em></blockquote><p>Are you a migrant working in a tech company? Leave us a comment with your experience.</p><p><em>Originally published at </em><a href="https://blog.anynines.com/international-migrants-day-learn-how-to-join-an-international-team/"><em>https://blog.anynines.com</em></a><em> on December 18, 2020.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2d26e1f8bd37" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Prometheus Pushgateway on Cloud Foundry with Basic Authentication]]></title>
            <link>https://anynines.medium.com/prometheus-pushgateway-on-cloud-foundry-with-basic-authentication-69dda71ec356?source=rss-a9189043b9c1------2</link>
            <guid isPermaLink="false">https://medium.com/p/69dda71ec356</guid>
            <category><![CDATA[prometheus]]></category>
            <category><![CDATA[pushgateway]]></category>
            <category><![CDATA[cloud-foundry]]></category>
            <dc:creator><![CDATA[anynines]]></dc:creator>
            <pubDate>Tue, 27 Oct 2020 14:33:40 GMT</pubDate>
            <atom:updated>2020-10-30T10:51:50.588Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*lnv4DQYIL-kzaXvx.png" /></figure><p>Authentication methods are currently not supported in Prometheus, nor in its related components like the Pushgateway.</p><p>However, authentication can be added via a reverse proxy. Pushing the Pushgateway as an app to Cloud Foundry with the go-buildpack or the binary-buildpack will not give you such functionality, but the staticfile-buildpack supports <a href="https://docs.cloudfoundry.org/buildpacks/staticfile/index.html#basic-authentication">configuring basic authentication</a>.</p><p>This article describes how to push the Pushgateway to Cloud Foundry using the staticfile-buildpack with basic authentication.</p><h3>Instructions</h3><h3>Create Working Directories</h3><p>First, let’s create a workspace directory for our project and change into it. Every file we create will be placed into that directory. The rest of the tutorial uses paths relative to that directory, so you may change it to your liking.</p><pre>$ mkdir -p ~/workspace/apps/pushgateway
$ cd ~/workspace/apps/pushgateway</pre><h3>Download and Extract Pushgateway Binary</h3><p>We will use the binary release of the Pushgateway, so head over to the <a href="https://github.com/prometheus/pushgateway/releases">Github release page</a> and download the Linux-amd64 archive of the version compatible with your Prometheus installation. For our purpose, we’ll be using the current version 1.0.0.</p><pre>$ wget https://github.com/prometheus/pushgateway/releases/download/v1.0.0/pushgateway-1.0.0.linux-amd64.tar.gz
$ wget https://github.com/prometheus/pushgateway/releases/download/v1.0.0/sha256sums.txt
$ sha256sum -c &lt;(grep pushgateway-1.0.0.linux-amd64.tar.gz sha256sums.txt)
pushgateway-1.0.0.linux-amd64.tar.gz: OK</pre><p>If the shasum check is OK, extract the binary and remove what is not needed anymore.</p><p>On Linux (GNU tar installed):</p><pre>$ tar xzf pushgateway-1.0.0.linux-amd64.tar.gz --wildcards --strip-components=1 */pushgateway
$ rm pushgateway-1.0.0.linux-amd64.tar.gz sha256sums.txt</pre><p>For macOS users (BSD tar installed):</p><pre>$ tar xzf pushgateway-1.0.0.linux-amd64.tar.gz --strip-components=1 */pushgateway
$ rm pushgateway-1.0.0.linux-amd64.tar.gz sha256sums.txt</pre><h3>Add Basic Authentication</h3><p>The staticfile-buildpack is designed to serve files like HTML or CSS via an integrated NGINX server. Basic authentication is set using a special file, Staticfile.auth, which the buildpack reads at container start to set up NGINX accordingly.</p><p>The file is in the typical .htpasswd file format and can be created using the following command (I will use pushgateway as the username, but you may choose whatever suits you):</p><pre>$ echo &quot;pushgateway:$(openssl passwd -1)&quot; &gt; Staticfile.auth</pre><p>Set the password, confirm it, and you’re good to go.</p><h3>Create an Application Manifest</h3><p>Everything except for the manifest.yml and the files/folders listed in a .cfignore will be placed into /home/vcap/app in the container.</p><p>NGINX is started by default via app/boot.sh on container start.
But since we also want to start the pushgateway process, this needs to be changed.</p><p>Create the following manifest.yml file in your workspace folder:</p><pre>---
name: pushgateway
disk_quota: 256M
memory: 256M
command: exec /home/vcap/app/start.sh
buildpack: staticfile_buildpack</pre><p>This changes the command executed at container start to /home/vcap/app/start.sh, which we have yet to create.</p><h3>Create the Start Script</h3><p>All you need to do is start the pushgateway in the background and continue with the default startup.</p><pre>#!/bin/bash
/home/vcap/app/pushgateway --web.listen-address 127.0.0.1:9091 &amp;
/home/vcap/app/boot.sh</pre><p>However, this does not redirect requests to 127.0.0.1:9091 yet. Instead, it serves all files in the workspace directory to the world, which is not what we want.</p><h3>Add a Custom Location</h3><p>To change this, the buildpack offers a way to customize the location block. We have to set root and location_include in a file named Staticfile:</p><pre>root: htdocs
location_include: includes/*.conf</pre><p>I have chosen htdocs to serve as the root location, but you can use whatever you like. If you leave it like that, however, you will get a staging error, because the root directory is empty.</p><pre>-----&gt; Root folder /tmp/app/htdocs
**ERROR** Invalid root directory: the application Staticfile specifies a root directory htdocs that does not exist</pre><p>All you need to do is add a dummy index.html file:</p><pre>mkdir -p htdocs
echo &quot;&lt;h1&gt;Nothing here&lt;/h1&gt;&quot; &gt; htdocs/index.html</pre><p>It doesn’t matter what content you add here, because it should actually never be displayed.</p><p>The location_include directive is relative to the /home/vcap/app/nginx/conf directory. NGINX will load any *.conf file there, which we will use to configure the location. But as the workspace directory is added to /home/vcap/app, we need to create additional directories.</p><pre>mkdir -p nginx/conf/includes</pre><p>Then add the following content to nginx/conf/includes/pushgateway.conf with your favorite editor:</p><pre>proxy_pass http://127.0.0.1:9091/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;</pre><h3>Adjust Scrape Configuration</h3><p>Your scrape config must be adjusted to scrape with basic authentication (see <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config">Prometheus Configuration</a>).</p><p>If you deploy Prometheus with <a href="https://github.com/bosh-prometheus/prometheus-boshrelease">prometheus-boshrelease</a>, the internal pushgateway does not have basic-auth support at the moment, and there are no ops files to scrape an external pushgateway.
But you can add a custom ops file like the following to the deployment to make up for it.</p><pre>- type: replace
  path: /instance_groups/name=prometheus2/jobs/name=prometheus2/properties?/prometheus/scrape_configs/-
  value:
    job_name: pushgateway
    static_configs:
    - targets:
      - ((pushgateway_host))
    scheme: https
    basic_auth:
      username: ((pushgateway_basicauth.username))
      password: ((pushgateway_basicauth.password))</pre><p>If you use credhub to store your deployment secrets (recommended), you need to set the pushgateway_basicauth user credentials and the pushgateway_host.</p><p>For example, using the credhub CLI (replace the paths and values with respect to your environment):</p><pre>$ credhub set -n /bosh/prometheus/pushgateway_host -t value -v pushgateway.app.domain.tld
$ credhub set -n /bosh/prometheus/pushgateway_basicauth -t user --username=pushgateway --password=&quot;REPLACE_ME&quot;</pre><p>In case you already have an external pushgateway set up without authentication, you can deploy this change without breaking metrics scraping; this is how we transitioned smoothly to an external pushgateway with basic authentication. It works because a pushgateway without authentication configured will simply ignore the additional credentials. In the next step, you can then push the basic-auth enabled pushgateway to Cloud Foundry.</p><p>The only thing that might break is if your configuration so far does not use HTTPS, but you could also just remove the scheme line from the ops file above to use HTTP (not recommended).</p><h3>Push the App</h3><p>The final step is to push the pushgateway to Cloud Foundry by simply using</p><pre>$ cf push</pre><p>after you have logged in and chosen the org/space where you want it to be deployed.</p><h3>Test</h3><p>Finally, let’s test pushing metrics and reading them from the pushgateway.</p><p>Should fail:</p><pre>$ metrics=&quot;# TYPE test_metric gauge
test_metric{job=\&quot;test\&quot;} 0&quot;
$ curl -i --data-binary @&lt;(echo &quot;$metrics&quot;) https://pushgateway.app.domain.tld/metrics/job/test
HTTP/1.1 401 Unauthorized
[...]
$ curl -i https://pushgateway.app.domain.tld/metrics
HTTP/1.1 401 Unauthorized
[...]</pre><p>Should work:</p><pre>$ metrics=&quot;# TYPE test_metric gauge
test_metric{job=\&quot;test\&quot;} 0&quot;
$ curl -i --user &quot;pushgateway:REPLACE_ME&quot; --data-binary @&lt;(echo &quot;$metrics&quot;) https://pushgateway.app.domain.tld/metrics/job/test
HTTP/1.1 202 Accepted
[...]
$ curl -i --user &quot;pushgateway:REPLACE_ME&quot; https://pushgateway.app.domain.tld/metrics
HTTP/1.1 200 OK
[...]</pre><h3>Final Thoughts</h3><p>In this tutorial, we have seen how to use the staticfile-buildpack to deploy the Prometheus Pushgateway to Cloud Foundry with basic authentication. It is a rather long procedure, and there were some traps along the way, because the staticfile-buildpack probably isn’t directly meant for this kind of usage. But once done, even updates are simple: just exchange the Pushgateway binary.</p><p>One thing, however, is not covered here and should be improved upon, and that is health monitoring and restarting of the pushgateway process in case it crashes.
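A minimal sketch of such supervision (assuming the start script from above; the restart loop and its 5-second backoff are illustrative additions, not part of the original setup) replaces the single background invocation with a loop:</p><pre>#!/bin/bash
# Restart the pushgateway whenever it exits (simple supervision sketch)
(
  while true; do
    /home/vcap/app/pushgateway --web.listen-address 127.0.0.1:9091
    echo &quot;pushgateway exited, restarting in 5s&quot; &gt;&amp;2
    sleep 5
  done
) &amp;
exec /home/vcap/app/boot.sh</pre><p>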
<p>A proper solution, however, is a topic for another time.</p><p><em>Originally published at </em><a href="https://blog.anynines.com/prometheus-pushgateway-on-cloud-foundry-with-basic-authentication/"><em>https://blog.anynines.com</em></a><em> on October 27, 2020.</em></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Kubernetes: Finalizers in Custom Resources]]></title>
            <link>https://anynines.medium.com/kubernetes-finalizers-in-custom-resources-b802701dee9c?source=rss-a9189043b9c1------2</link>
            <guid isPermaLink="false">https://medium.com/p/b802701dee9c</guid>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[minikube]]></category>
            <category><![CDATA[postgresql]]></category>
            <category><![CDATA[kubectl]]></category>
            <category><![CDATA[yaml]]></category>
            <dc:creator><![CDATA[anynines]]></dc:creator>
            <pubDate>Tue, 29 Sep 2020 09:49:00 GMT</pubDate>
            <atom:updated>2020-10-30T10:56:28.840Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*yR5K7xtoqINuU7_2.png" /></figure><p><strong>Authors:</strong> Matthew Doherty, Philipp Kuntz, Robert Gogolok</p><p>When extending the Kubernetes API with <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/">CustomResourceDefinitions</a>, you’ll come across the problem of cleaning up external resources when a custom resource is deleted. Although you can create a custom resource simply to store and retrieve structured data, most of the time some entity is involved, such as a custom controller. The controller manages the resource and creates other, external resources to implement its semantics. Those external resources should not live forever once the custom resource no longer exists.</p><p>In the following text, we’ll work with a custom resource example that represents a data service instance.</p><p>That data service instance could be, for example, an instance running a PostgreSQL database. During a backup operation, that service instance might store data in an external blob store, for instance AWS S3. Once you want to get rid of this custom resource and therefore that service instance, you might want to clean up the backups that were created specifically for this data service instance (for example on AWS S3).</p><p>This is where <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#finalizers">Kubernetes Finalizers</a> come into play: they help to clean up external resources before the deletion of a custom resource. You can add finalizers to a custom resource that will prevent, for example, the kubectl tool from deleting it right away.</p><h3>Demo</h3><p>Let’s do a practical demo with <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/">Minikube</a> showing how finalizers can prevent the deletion of custom resources.</p><p>You can install Minikube using the <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/">Install Minikube instructions</a>. Once it is up and running, you should be able to call <em>kubectl get crd</em> without an error.</p><p>In order to create a custom resource, we first need to create a custom resource definition.</p>
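<p>Copy the following content to a file named <em>customresourcedefinition.yaml</em>. It is a minimal custom resource definition for the above-mentioned example of PostgreSQL service instances; the schema is deliberately reduced to the two <em>spec</em> fields we use, and the short name <em>si</em> is registered for the <em>kubectl get si</em> calls later (a production definition would likely carry a stricter schema):</p><pre>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: serviceinstances.example.com
spec:
  group: example.com
  names:
    kind: ServiceInstance
    plural: serviceinstances
    singular: serviceinstance
    shortNames:
    - si  # enables kubectl get si
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      # minimal schema, covering only the fields used in this demo
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              service:
                type: string
              version:
                type: string</pre>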
<p>After creating the file, we’re ready to upload our custom resource definition to Kubernetes:</p><pre>$ kubectl apply -f customresourcedefinition.yaml
customresourcedefinition.apiextensions.k8s.io/serviceinstances.example.com created</pre><p>Next we’ll create a custom resource fitting our custom resource definition. Create the following content in a file named <em>customresource0.yaml</em>:</p><pre>apiVersion: &quot;example.com/v1&quot;
kind: ServiceInstance
metadata:
  name: my-new-service-instance0
  finalizers:
  - my-finalizer.example.com
spec:
  service: PostgreSQL
  version: &quot;12&quot;</pre><p>Then we apply the custom resource using:</p><pre>$ kubectl apply -f customresource0.yaml
serviceinstance.example.com/my-new-service-instance0 created</pre><p>Under <em>metadata.finalizers</em> we’ve added an entry for a finalizer called <em>my-finalizer.example.com</em>.</p><p>So far this doesn’t play a role, and a new ServiceInstance resource has been created with the name <em>my-new-service-instance0</em>.</p><p>We can get the resource’s yaml representation using:</p><pre>$ kubectl get si my-new-service-instance0 -o yaml
...
apiVersion: example.com/v1
kind: ServiceInstance
metadata:
  ...
  creationTimestamp: &quot;2020-09-09T21:36:56Z&quot;
  finalizers:
  - my-finalizer.example.com
  ...
  name: my-new-service-instance0
  ...
spec:
  service: PostgreSQL
  version: &quot;12&quot;</pre><p>Let’s try to delete the resource using:</p><pre>$ kubectl delete -f customresource0.yaml
serviceinstance.example.com &quot;my-new-service-instance0&quot; deleted</pre><p>After printing the delete line, <em>kubectl</em> hangs.</p><p>In another shell, we can now output the yaml representation of that custom resource again using:</p><pre>$ kubectl get si my-new-service-instance0 -o yaml
apiVersion: example.com/v1
kind: ServiceInstance
metadata:
  ...
  deletionTimestamp: &quot;2020-09-09T21:52:00Z&quot;
  ...</pre><p>Kubernetes has added the field <em>metadata.deletionTimestamp</em> to signal the intention to delete that resource. The finalizer entry we’ve added prevents Kubernetes from actually deleting the custom resource.</p><p>In order to get rid of the resource, we need to remove the finalizer entry, signaling that the external resources locked by that finalizer name have been cleaned up.</p><p>Let’s edit the file <em>customresource0.yaml</em> and remove the finalizer. The file should now look similar to the following content:</p><pre>apiVersion: &quot;example.com/v1&quot;
kind: ServiceInstance
metadata:
  name: my-new-service-instance0
spec:
  service: PostgreSQL
  version: &quot;12&quot;</pre><p>Let’s apply the changes:</p><pre>$ kubectl apply -f customresource0.yaml
serviceinstance.example.com/my-new-service-instance0 configured</pre><p>When we switch back to the hanging <em>kubectl</em> command, we can see it has succeeded. The custom resource has been removed, since its list of finalizers is now empty. To Kubernetes, an empty finalizer list means that all finalizers have been executed and have done their job.</p><h3>Conclusion</h3><p>Specifying finalizers can prevent the premature deletion of a custom resource. This provides the opportunity to clean up external resources associated with the custom resource before it disappears.</p><p>In a future article, we’ll extend our knowledge to Kubernetes operators and how to protect custom resources with finalizers during reconciliation.</p><p><em>Originally published at </em><a href="https://blog.anynines.com/kubernetes-finalizers-in-custom-resources/"><em>https://blog.anynines.com</em></a><em> on September 29, 2020.</em></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Impact of a digital transformation on software development]]></title>
            <link>https://medium.com/anynines/impact-of-a-digital-transformation-on-software-development-2a3f302921ae?source=rss-a9189043b9c1------2</link>
            <guid isPermaLink="false">https://medium.com/p/2a3f302921ae</guid>
            <category><![CDATA[application-platform]]></category>
            <category><![CDATA[digital-transformation]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[operations-software]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[anynines]]></dc:creator>
            <pubDate>Fri, 16 Mar 2018 13:55:50 GMT</pubDate>
            <atom:updated>2019-01-15T11:31:04.472Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ncuHJ0abLa15B5PT0PODEA@2x.png" /></figure><h4>Evolution of Software Development and Operations — Part 1</h4><p>In the past 10 years software development has changed significantly. So has software operations.</p><p>Software development and operations form a continuous interplay.</p><p>Let’s have a closer look, outline these changes and see how they led to the emergence of modern application platforms, where the cutting edge lies and which challenges remain.</p><p>The following chapters will walk through a history of software operations. The intention here is not to come up with a perfect re-narration but to roughly describe the development of software operations over time.</p><p>By getting into the spirit of a certain operational era you will gain a vivid understanding of its particular challenges. From there it is much easier to understand the subsequent evolutionary step as a logical response to a particular set of challenges typical for the corresponding epoch.</p><p>The same applies to the chapters describing the evolution of software development. This development is also segmented to illustrate the challenges and impact of each era and its resulting technological innovations.</p><h3>Physical servers</h3><p>Production applications have long been operated on physical servers, often a single server. Depending on the uptime requirements this may have been anything from an off-the-shelf commodity server to high-end server hardware.</p><p>Classic web applications on such a physical server often consisted of an application server process and a database server process. Files and other assets the application received or produced were stored on the server’s filesystem.</p><p>A LAMP stack, for example, has been such a typical web stack. <strong>LAMP</strong> means <strong>L</strong>inux, <strong>A</strong>pache, <strong>M</strong>ySQL and <strong>P</strong>HP. It’s not so important which application server, database implementation or language is used; the point is that all these components are <strong>located on a single physical machine</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6Yoej7--SDt3ea-L52DVTQ.png" /><figcaption>Classical server stack.</figcaption></figure><p>This makes the server a <strong>SPOF = single point of failure</strong>. When (and not if) the server fails, the entire application goes down. With a single server you may be lucky and it keeps running for years. Even with a cheap server you may pick a winner. <strong>With hundreds of servers statistics kick in</strong> and <strong>hardware failures become a regular occurrence</strong>, consuming significant work time to recover from.</p><blockquote><em>With hundreds of servers statistics kick in and hardware failures become a regular occurrence, consuming significant work time to recover from.</em></blockquote><p>The quality of services heavily depends on the organization of the datacenter and hardware. Technicians must respond quickly and spare parts have to be at hand.</p><p>Ideally, these spare parts do not come from the same batch. Components such as hard drives are more likely to fail with age. 
Replacing one HDD with another from the same batch may lead to a sequence of failures.</p><p>A series of five failing HDDs has been reported for a single server in a single week, because the server provider supplied used replacement HDDs from the same batch as the failed part.</p><p>Other frequently failing parts are power supplies. If the power supply shuts down, the server shuts down. There are servers with redundant power supplies, though. If one power supply fails, the other takes over.</p><p>Of course you need to pay more for having a second power supply and the corresponding failover electronics. More than that, a second power supply needs a second power line. Ideally, this power line is independent from the first, to also protect against a failure of the first power line. Costs escalate quickly.</p><p>Each <strong>hardware failure also affects the software layer</strong>. Failed HDDs or RAID systems may cause a loss of data. A corrupted filesystem may cause a loss of data. A corrupted database may cause a loss of data. Losing data is everybody’s nightmare.</p><p>Therefore, a solid <strong>backup &amp; restore</strong> <strong>strategy is absolutely essential</strong>. Just assume that every possible failure happens from time to time. Looking at this list, you need at least protection from the most likely failure scenarios.</p><p>Even <strong>with a backup &amp; recovery strategy, failures are not neutralized</strong>. A failure always implies harm to the business. A potential data loss resulting from the delta between production data and its most recent backup is often unavoidable in a single-server scenario. Still, this isn’t necessarily the most harmful aspect of an outage.</p><p>The fact that <strong>the application may be down for hours while being recovered</strong> also contributes greatly to the overall damage. Customers won’t be able to use the application. Data being sent automatically to the application may be lost if the sending systems do not come with a robust retry logic.</p><p>That’s why the <strong>mean time to recover (MTR)</strong> is an important operational quality of any system. As the name implies, it gives a hint about the recovery time to be expected in case of a disaster. It is therefore wise to <strong>optimize the MTR during system design</strong> by <strong>providing</strong> appropriate <strong>redundancies</strong> and <strong>avoiding SPOFs</strong> wherever possible and affordable.</p><p>Applying this strategy, ideally a level 1 incident causing a <strong>system-wide failure can be downgraded to</strong> <strong>a level 2 or level 3 incident</strong>: a loss of redundancy <strong>leaving the system fully operational</strong>.</p><p>For the hardware of a single server, reducing the MTR could mean having a technician with spare parts at hand. But think of the <strong>time a sysop needs to manually set up a server stack</strong>: installing the operating system (OS), the application and database server, configuring both services, deploying the application, configuring and starting it, setting up backup, monitoring and logging.</p><p>The list is long and so will be the day of the sysop. <strong>It may take hours to recover the software side of the server failure alone</strong>. For this reason, even with a physical server it is meaningful to <strong>apply automation to software installation and configuration</strong>, as this reduces the MTR and thus the damage resulting from server outages.</p><p>Let’s step back for a second. Imagine the single server pattern is repeated many times. 
It leads to a data center full of unconnected, dedicated servers. Experience has shown that the overall load of these servers is unevenly distributed.</p><p>Often the average load of such a data center is below 10%, which is a gigantic waste. Not only do servers cost money, they consume power and produce heat. Heat must be cooled, consuming even more power. Power needs to be redundant, so emergency generators need to be scaled accordingly.</p><p>There are two lessons to learn from this: <strong>repairing physical servers</strong> and <strong>recovering the software layer are key factors for the MTR</strong> and thus overall availability.</p><p>In order to overcome these issues, <strong>clusters of servers can be built, eliminating single points of failure (SPOFs)</strong>. A cluster provides higher uptime as it decouples the availability of an application from the availability of a single server.</p><p>Another dimension for addressing the above-mentioned issues is applying virtualization and <a href="https://blog.anynines.com/evolution-of-software-development-and-operations-part-2/#Devops%20Gen%201">software automation</a>, ultimately converging into <a href="https://blog.anynines.com/evolution-of-software-development-and-operations-part-3/">infrastructure as code</a> and <a href="https://blog.anynines.com/application-platforms/">application platforms</a>.</p><h3>Physical Clusters</h3><p>As described earlier, a solo server is a single point of failure and comes with the risk of hours-long downtimes. Therefore, combining servers into clusters is a well-known strategy to increase a system’s uptime.<br>Let’s briefly walk through taking a single-server setup to the next level by transforming it into a cluster of servers.</p><p>Many books describe how clusters can be built. For the sake of simplicity, we assume having a web stack as described earlier. Imagine a monolithic version of a Facebook-like social web app. It needs to store user data including assets such as uploaded images and videos. Structured data such as profile information, friendship information and posts is stored in a relational database management system (RDBMS). A database like MySQL or PostgreSQL will do. Assets such as pictures and videos are stored on the filesystem of the server.</p><p>In the following, the single server setup is scaled out to several servers. Both load and redundancy aspects will be discussed along the way.</p><p>In order to <strong>eliminate the application server as a SPOF</strong>, an <strong>additional application server</strong> needs to be <strong>added</strong>. So we need another physical server. This scale-out also increases the load capacity, as two application servers can serve more user requests than one.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*U95liuWN4b5VI5J2t5mSeA.png" /><figcaption>Single server with components.</figcaption></figure><p>Now we have two application servers, each running our application. However, the domain can only resolve to a single host. So we need to <strong>add a load balancer</strong>.<br>The load balancer’s job is to accept <strong>incoming requests</strong> on a public network interface and <strong>balance them across the application servers</strong> on a private network. 
This implies that the data center needs to be flexible enough to allow the creation of private networks, which not every provider is willing to do.</p><p>The load balancer adds another physical machine.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Tw1BapCrMz1P0aiq_eoE0A.png" /><figcaption>App server.</figcaption></figure><p>Now you have load balancing across the application servers. If one application server fails, the other can still serve your app. But wait, this doesn’t work at the moment because our database is still co-located on one of the application servers.</p><p>So let’s <strong>move the database to a separate server</strong>, for now. We will take care of its redundancy later, as there’s still an issue with the application server setup.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*I2XY33Uz7fcloDjqifIp2Q.png" /><figcaption>Load balancer, 2 app servers and 1 database server.</figcaption></figure><p>Setting up the app on two application servers creates a new challenge: files stored on the filesystem by our application end up randomly on one of the two application servers.<br>At this point, users would not see their pictures or videos whenever their requests are balanced to the wrong application server. To overcome this issue, assets need to be stored in a common asset store accessible by both application servers.</p><p>We <strong>do not use NFS</strong> as it <strong>neither scales</strong> well <strong>nor</strong> does it <strong>provide adequate redundancy</strong>. This problem escalates quickly, as solutions for storing assets such as <strong>OpenStack Swift require 3 to 5 servers</strong> to reach their full availability potential.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RsGcndL9GXpv-S-wEpfpjQ.png" /><figcaption>Load balancer, 2 app servers and 1 database server with 3 OpenStack Swift servers.</figcaption></figure><p>With an object store in place, applications can now write assets to the shared asset store. Users can retrieve them either directly from the object store or proxied through the application servers.</p><p>Time to look at the database. Although on a dedicated server, it’s still both a SPOF and a potential bottleneck. <strong>Most RDBMSs don’t scale horizontally</strong>, so you are forced to scale vertically by buying bigger machines.<br>As overcoming the limitations of an RDBMS may require fundamental changes in the architecture of your software, we skip this issue for now.</p><p>To keep it simple, let’s assume the database won’t be a performance bottleneck for a while. So we rather <strong>focus on eliminating the database as a SPOF</strong>.</p><p>PostgreSQL, for example, supports <strong>asynchronous replication</strong> out of the box. By <strong>adding a cluster manager such as </strong><a href="https://repmgr.org/"><strong>repmgr</strong></a>, <strong>failure detection</strong> and <strong>automatic failover</strong> capabilities are added as well. The cluster setup adds another two physical machines, as the DB cluster needs three machines in total.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gqs_kFp8ZlM5Hkxzp1FxKg.png" /><figcaption>Load balancer, 2 app servers, 3 database servers with 3 OpenStack Swift servers.</figcaption></figure><p>Let’s look at the architecture now. The app servers, the object store and the database are redundant. 
But the load balancer isn’t.</p><p>So let’s <strong>add another load balancer</strong>.</p><p>The <strong>load balancers also need a cluster manager</strong>. The cluster manager is responsible for sending and verifying <strong>heartbeats</strong> between cluster nodes and <strong>triggering a failover</strong> if necessary.<br>Part of the failover procedure is the <strong>takeover of the public load balancer’s IP</strong>, as this is required to receive incoming traffic. The load balancer does not maintain relevant state, so there’s no urgent need for a quorum-based algorithm here. Therefore, we can leave it at two instead of three cluster nodes, in contrast to the database cluster.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*I8pJLPUw8vGZyYJwlhHEJg.png" /><figcaption>2 load balancers, 2 app servers, 3 database servers and 3 OpenStack Swift servers.</figcaption></figure><p>We are now looking at <strong>nearly a dozen servers</strong> and many system components. The overall <strong>maintenance effort of such a cluster is extensive</strong> and not comparable to that of a single server.<br>This scenario requires either a group of people or, preferably, rigorous automation.</p><blockquote><em>Generally speaking, clusters like this are expensive in terms of both labor and hardware.</em></blockquote><p>Generally speaking, clusters like this are expensive in terms of both labor and hardware. Complexity and costs have long been obstacles preventing smaller applications from benefiting from these topologies.</p><p>You can put many of these components on, let’s say, three machines. However, without proper virtualization or containerization, the lack of isolation between the different processes such as load balancer, application and database may cause issues and undesired interactions. Hardware costs would be reduced, but the level of complexity remains.</p><p><em>Originally published at </em><a href="https://blog.anynines.com/software-development-changed/"><em>blog.anynines.com</em></a><em> on March 16, 2018.</em></p>]]></content:encoded>
        </item>
    </channel>
</rss>