<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Engineering @ Chargebee - Medium]]></title>
        <description><![CDATA[Chargebee is the world’s leading Revenue Growth Management (RGM) platform for subscription businesses. This blog lets our engineering team engage with the larger tech universe. Dive into our stories from under the hood! - Medium]]></description>
        <link>https://medium.com/chargebee-engineering?source=rss----3e2581cef21d---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Engineering @ Chargebee - Medium</title>
            <link>https://medium.com/chargebee-engineering?source=rss----3e2581cef21d---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 24 Apr 2026 08:25:41 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/chargebee-engineering" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Inside Chargebee Java SDK v4: A Practical Redesign]]></title>
            <link>https://medium.com/chargebee-engineering/inside-chargebee-java-sdk-v4-a-practical-redesign-2a6f456ecf9d?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/2a6f456ecf9d</guid>
            <category><![CDATA[billing]]></category>
            <category><![CDATA[developer-experience]]></category>
            <category><![CDATA[sdk]]></category>
            <category><![CDATA[java]]></category>
            <category><![CDATA[chargebee]]></category>
            <dc:creator><![CDATA[KP]]></dc:creator>
            <pubDate>Wed, 18 Mar 2026 13:47:05 GMT</pubDate>
            <atom:updated>2026-03-18T13:47:03.642Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>TLDR:</strong> Chargebee Java SDK v4 is a ground-up rewrite with immutable thread-safe clients, type-safe parameters and responses, async support with CompletableFuture, and a builder-first API. This post covers the design changes, trade-offs, and migration patterns from v3.</p><p><strong>Billing is infrastructure. Treat it like one.</strong></p><p>Billing is rarely a core product differentiator, but it has strict correctness and reliability requirements. A mishandled webhook or a race condition in subscription renewal can lead to duplicate charges, reconciliation issues, and support escalations.</p><p>This is why many teams use platforms like <a href="https://www.chargebee.com/">Chargebee</a>. It handles the billing lifecycle — subscriptions, invoicing, revenue recognition, dunning, and usage-based billing — so application services can focus on domain logic.</p><p>In practice, <strong>a billing platform is only as effective as its SDK for developers</strong>. Backend capability is less useful when integration code is difficult to reason about, hard to test, or prone to regressions.</p><p>That’s why we rebuilt the Chargebee Java SDK from the ground up.</p><p><strong>What was wrong with v3?</strong></p><p>v3 worked. It powered thousands of integrations across startups and enterprises. But it carried patterns from an era when Java development looked very different:</p><p><strong>Global mutable state.</strong> Configuration lived in a static singleton.</p><pre>// v3: One global config. Hope nobody changes it mid-request.<br>Environment.configure(&quot;acme&quot;, &quot;cb_test_xxx&quot;);</pre><p>If you’ve ever debugged a multithreaded issue where two threads were hitting different Chargebee sites using the same global `Environment`… you know the pain. That’s not a billing problem. 
That’s a “your SDK fights your architecture” problem.</p><p><strong>Untyped responses.</strong> Every API call returned a generic Result object. You had to <em>know</em> which resource to extract.</p><p><strong>Flat parameter lists.</strong> Nested objects like billing addresses were flattened into method chains like billingAddressCity(), billingAddressState() — functional, but far from idiomatic Java.</p><p>v3 was a product of its time. v4 is built for how Java teams work today.</p><p><strong>v4: Java-native by design</strong></p><p><strong>Immutable, thread-safe client</strong></p><p>The ChargebeeClient is the single entry point. It’s immutable, thread-safe, and built with the fluent builder pattern common in modern Java libraries.</p><pre>ChargebeeClient client = ChargebeeClient.builder()<br>  .siteName(&quot;acme&quot;)<br>  .apiKey(&quot;cb_test_xxx&quot;)<br>  .timeout(30000, 80000)<br>  .retry(<br>    RetryConfig.builder()<br>      .enabled(true)<br>      .maxRetries(3)<br>      .baseDelayMs(500)<br>      .build()<br>  )<br>  .build();</pre><p>Create it once. Inject it everywhere. No global state, and no synchronised workarounds around billing calls.</p><p><strong>Type-safe everything</strong></p><p>Every API operation now has <strong>typed parameter builders</strong> and <strong>typed response objects</strong>. 
IDE autocomplete becomes reliable, and the compiler catches structural mistakes earlier.</p><pre>CustomerCreateResponse response = client.customers().create(<br>  CustomerCreateParams.builder()<br>    .firstName(&quot;Ada&quot;)<br>    .lastName(&quot;Lovelace&quot;)<br>    .email(&quot;ada@example.com&quot;)<br>    .billingAddress(<br>      CustomerCreateParams.BillingAddressParams.builder()<br>        .line1(&quot;50 Market St&quot;)<br>        .city(&quot;San Francisco&quot;)<br>        .state(&quot;CA&quot;)<br>        .zip(&quot;94105&quot;)<br>        .country(&quot;US&quot;)<br>        .build())<br>    .build());<br>Customer customer = response.getCustomer();</pre><p>No more stringly-typed parameters. No more generic Result objects. If the method compiles, the parameter structure is valid. That’s the kind of confidence you want when you’re processing payments.</p><p><strong>Async out of the box</strong></p><p>v4 ships with first-class async support. Every service method has an Async variant that returns a CompletableFuture:</p><pre>CompletableFuture&lt;CustomerCreateResponse&gt; future =<br>  client.customers().createAsync(<br>    CustomerCreateParams.builder()<br>      .firstName(&quot;Ada&quot;)<br>      .build()<br>  );<br>future.thenAccept(response -&gt; {<br>  log.info(&quot;Created customer: {}&quot;, response.getCustomer().getId());<br>});</pre><p>If you’re running a high-throughput system — say, processing usage records for a metered billing model — blocking threads on HTTP calls is a luxury you can’t afford. Async support means your billing integration scales with your application, not against it.</p><p><strong>Structured exception handling</strong></p><p>v3 gave you string error codes and public fields. 
v4 gives you a proper exception hierarchy with strongly-typed error enums:</p><pre>try {<br>  client.customers().create(params);<br>} catch (InvalidRequestException e) {<br>  ApiErrorCode errorCode = e.getApiErrorCode();<br>  if (errorCode instanceof BadRequestApiErrorCode code) {<br>    if (code == BadRequestApiErrorCode.DUPLICATE_ENTRY) {<br>      // Handle duplicate - compiler verified this is a real error code<br>    }<br>  }<br>} catch (TransportException e) {<br>  // Network-level errors: DNS failures, timeouts<br>  // v3 didn&#39;t distinguish these from API errors. v4 does.<br>}</pre><p>Pattern matching on error codes. TransportException for network issues vs APIException for server errors. This is how error handling should work in a billing integration where reliability is non-negotiable.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*A0BWoD7V9_JRcbzv9f3kSg.png" /></figure><p><strong>Billing gets complex quickly</strong></p><p>If you’ve ever tried to build billing in-house, you know the drill. It starts with a payment gateway integration and a subscriptions table. Then someone asks for prorations. Then annual plans. Then usage-based pricing because your AI feature charges per API call. Then tax compliance for the EU expansion. Then dunning logic because credit cards expire and customers forget.</p><p>Suddenly your “simple billing module” is 15,000 lines of edge cases with ownership spread across multiple teams.</p><p>This is the context where managed billing platforms are useful. Chargebee handles subscription lifecycle, invoicing, tax, dunning, and revenue recognition; the SDK is the integration boundary for Java services.</p><p>What’s relevant to v4 specifically is the rise of usage-based billing. More teams are moving to consumption pricing — API calls, tokens, compute hours — which means the integration needs to handle high-volume event ingestion without becoming a bottleneck. 
v4’s async APIs and thread-safe client directly address that requirement.</p><h3>Migrating from v3: Core patterns</h3><p>The migration follows a consistent set of patterns. Here’s the short version:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nJ73-Ri_83ObZ3StRjA5xw.png" /></figure><p>The patterns are mechanical and consistent. We’ve also published a <a href="https://github.com/chargebee/chargebee-java/tree/v4">detailed migration guide</a> with a transformation prompt you can use with an LLM to automate much of the migration.</p><p><strong>Getting started</strong></p><p><strong>For existing v3 users</strong> — update your dependency and follow the migration guide. The type system will guide you; most v3 patterns have a direct v4 equivalent.</p><pre>&lt;dependency&gt;<br>  &lt;groupId&gt;com.chargebee&lt;/groupId&gt;<br>  &lt;artifactId&gt;chargebee-java&lt;/artifactId&gt;<br>  &lt;version&gt;4.5.0&lt;/version&gt;<br>&lt;/dependency&gt;</pre><p><strong>For new projects</strong> — add the dependency, create a ChargebeeClient, and start building. v4 requires Java 11+ and uses Gson for JSON processing (included transitively).</p><pre>implementation &#39;com.chargebee:chargebee-java:4.5.0&#39;</pre><p><strong>Engineering takeaways</strong></p><p>Billing code is long-lived. Once it’s in your codebase, it tends to remain for years. It touches payments, subscriptions, invoicing, and revenue, so it should be clean, type-safe, testable, and built on patterns that age well.</p><p>Chargebee Java SDK v4 focuses on one principle: <strong>billing integrations should follow the same engineering standards as the rest of your Java codebase.</strong></p><p>No more global state. No more untyped responses. 
No more pretending it’s still Java 6.</p><p>For teams on v3, the migration is mostly mechanical and provides clearer APIs, safer concurrency behaviour, and stronger type guarantees.</p><p><em>Reference: </em><a href="https://github.com/chargebee/chargebee-java"><em>Chargebee Java SDK on GitHub</em></a></p><p>Questions or feedback: <a href="https://discord.gg/nEtpvhqzG3">Discord</a> or <a href="mailto:dx@chargebee.com">dx@chargebee.com</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2a6f456ecf9d" width="1" height="1" alt=""><hr><p><a href="https://medium.com/chargebee-engineering/inside-chargebee-java-sdk-v4-a-practical-redesign-2a6f456ecf9d">Inside Chargebee Java SDK v4: A Practical Redesign</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How we rewrote the Chargebee Go SDK v4 from scratch]]></title>
            <link>https://medium.com/chargebee-engineering/how-we-rewrote-the-chargebee-go-sdk-v4-from-scratch-44ed09cf96e2?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/44ed09cf96e2</guid>
            <category><![CDATA[developer-experience]]></category>
            <category><![CDATA[go]]></category>
            <category><![CDATA[chargebee]]></category>
            <category><![CDATA[sdk]]></category>
            <dc:creator><![CDATA[Srinath Sankar]]></dc:creator>
            <pubDate>Fri, 30 Jan 2026 05:59:02 GMT</pubDate>
            <atom:updated>2026-01-30T05:59:01.316Z</atom:updated>
            <content:encoded><![CDATA[<p>An experiment in how to <a href="https://www.goodreads.com/book/show/170548.Don_t_Sweat_the_Small_Stuff_and_It_s_All_Small_Stuff">sweat the small stuff</a></p><figure><img alt="One does not simply rewrite an SDK" src="https://cdn-images-1.medium.com/max/651/1*jb9sNzrq1x4WD5HkdtHfzw.jpeg" /></figure><blockquote>TL;DR: We rewrote the <a href="https://github.com/chargebee/chargebee-go/tree/v4">Chargebee Go SDK</a> from scratch and <a href="https://pkg.go.dev/github.com/chargebee/chargebee-go/v4">v4</a> is now available for testing. Please join our <a href="https://discord.gg/nEtpvhqzG3">Discord</a> server to keep up with updates and to share your feedback.</blockquote><h3>Intro</h3><p>As the Chargebee API evolved over time, the current version of the chargebee-go SDK had started to show its scars. While it is functional and continues to be supported, it suffered from technical debt and design patterns that felt foreign to Go developers. Towards the end of 2025, it became increasingly clear from community feedback that it was time to modernize the library.</p><p>We embarked on an experiment to rewrite the SDK from scratch with a singular focus: Developer Experience. We wanted to build a library that felt like it belonged in the Go ecosystem, leveraging the language’s strengths and making it more idiomatic.</p><h3>Goals</h3><p>Before writing a single line of code, we established clear goals to address the pain points of the previous version, gathered from existing and potential customers via various channels like GitHub, Discord, Support teams, etc. We categorized them broadly, and the following goals started to emerge:</p><h4>Fix package layout</h4><p><strong>The Problem</strong>: In the previous version, related entities were often scattered across multiple sub-packages. To work with a particular resource, one typically had to import the resources, enums, actions, and global enums separately. 
This led to multiple import aliases and a cluttered import block at the top of every source file. It also made it difficult to rely on the IDE to suggest which symbols to import. This seemed to confuse LLMs and coding agents as well, since they had no predictable way of looking up the required symbols without the entire SDK taking up the context window.</p><p><strong>The Solution</strong>: We flattened the package layout. The chargebee package is your primary entry point, and all resources are now available directly in the root package. This structure is significantly more user-friendly, reducing the cognitive load of remembering where specific models or enums live.</p><pre>/// Before (v3)<br>import (<br>    &quot;github.com/chargebee/chargebee-go/v3&quot;<br>    subscriptionAction &quot;github.com/chargebee/chargebee-go/v3/actions/subscription&quot;<br>    subscriptionEnum &quot;github.com/chargebee/chargebee-go/v3/models/subscription/enum&quot;<br>    &quot;github.com/chargebee/chargebee-go/v3/models/subscription&quot;<br>    &quot;github.com/chargebee/chargebee-go/v3/enum&quot;<br>)<br><br><br>/// Now (v4)<br>import &quot;github.com/chargebee/chargebee-go/v4&quot;<br>// Everything is accessible via the main package</pre><h4>No global state</h4><p><strong>The Problem</strong>: The old SDK relied heavily on global methods and shared variables for configuration (like API keys). This made it difficult to manage multiple Chargebee sites within a single application and could introduce concurrency bugs.</p><p><strong>The Solution</strong>: All state is now encapsulated in a Client struct. Configuration options, such as custom HTTP clients, timeouts, and retry logic, are applied on a per-client basis. 
This makes the SDK concurrency-safe and multi-tenant friendly by default.</p><figure><img alt="Client and Services Hierarchy" src="https://cdn-images-1.medium.com/max/1024/1*MhOc_tLiL8cu6NMZ_8TH-w.png" /><figcaption>Client Hierarchy</figcaption></figure><pre>/// Before (v3)<br>chargebee.Configure(&quot;{site_api_key}&quot;, &quot;{site}&quot;)<br><br><br>/// Now (v4)<br>// Initialize a client with specific config<br>usConfig := &amp;chargebee.ClientConfig{<br>    SiteName: &quot;{site_us}&quot;,<br>    ApiKey:   &quot;{us_api_key}&quot;,<br>}<br>usClient := chargebee.NewClient(usConfig)<br><br>// You can have a second client for a different region<br>euClient := chargebee.NewClient(&amp;chargebee.ClientConfig{<br>    SiteName: &quot;{site_eu}&quot;,<br>    ApiKey:   &quot;{eu_api_key}&quot;,<br>})</pre><h4>Strictly typed API responses</h4><p><strong>The Problem</strong>: Previously, developers had to deal with a generic Result object. Retrieving the actual data required nil-checking and type assertions, which defeated the purpose of using a statically typed language like Go.</p><p><strong>The Solution</strong>: Thanks to Go generics, we now have typed response objects for all requests. This makes it easier for IDEs to offer better suggestions and to ensure static type checks pass during development and when compiling the program. 
It also removes unnecessary noise: you can see which fields are actually available in a response without repeated nil checks.</p><pre>/// Before (v3)<br>var result chargebee.Result<br>result, err = subscriptionAction.Create(&amp;subscription.CreateRequestParams{}).Request()<br>// result has &gt; 100 fields without a clear indication of which ones are<br>// actually available for this request<br>if result.Subscription != nil &amp;&amp; result.Customer != nil {<br>    fmt.Printf(&quot;%+v %+v\n&quot;, result.Subscription, result.Customer)<br>}<br><br><br>/// Now (v4)<br>var res *chargebee.SubscriptionCreateResponse<br>res, err := client.Subscription.Create(&amp;chargebee.SubscriptionCreateRequest{})<br>// response is statically typed and contains these exported fields<br>fmt.Printf(&quot;%+v %+v %+v %+v %+v\n&quot;, res.Subscription, res.Customer, res.Card, res.Invoice, res.UnbilledCharges)</pre><h4>Request consistency</h4><p><strong>The Problem</strong>: The old Request() and ListRequest() methods were leaky abstractions that actually dispatched the API call and returned the Result. It wasn&#39;t immediately clear how the two differed.</p><p><strong>The Solution</strong>: The request object now encapsulates all the fields and options needed for the API call. There is no longer a separate .Request() step: each resource action takes the request object and dispatches the call when the method is invoked. 
The distinction between Request() and ListRequest() has also been done away with.</p><pre>/// Before (v3)<br>res, err := subscriptionAction.Create(&amp;subscription.CreateRequestParams{<br>    PlanId: &quot;cbdemo_grow&quot;,<br>}).Request()<br><br>res, err := subscriptionAction.List(&amp;subscription.ListRequestParams{<br>    Limit: chargebee.Int32(5),<br>}).ListRequest()<br><br><br>/// Now (v4)<br>res, err := client.Subscription.Create(&amp;chargebee.SubscriptionCreateRequest{<br>    PlanId: &quot;cbdemo_grow&quot;,<br>})<br><br>res, err := client.Subscription.List(&amp;chargebee.SubscriptionListRequest{<br>    Limit: chargebee.Int32(5),<br>})</pre><h4>Idiomatic error handling</h4><p><strong>The Problem</strong>: The old SDK had quite a few places where it would panic(). Libraries that panic are generally frowned upon, as they can crash the entire application. Developers had to know this limitation and then mitigate the risk by guarding the calling method using recover. Even though this was a known issue, it was not an easy fix without breaking changes to the SDK.</p><p><strong>The Solution</strong>: Almost all panics have been removed in favor of returning an error. The library now calls panic() only if there is a configuration error and chargebee.NewClient() cannot instantiate a client with the given site name and API key.</p><h3>Our Process</h3><p>It was quite obvious from the get-go that we would need to maintain a new major version of the SDK going forward. Hence, we went through a structured delivery process where we tried to address most, if not all, known issues with the current version. We went through three distinct phases:</p><h4>Top-down API design</h4><p>We started with <em>Readme Driven Development</em>: documenting the current state and coming up with the equivalent “ideal” state. This was just the high-level API design — creating a config, creating the client, invoking the service, creating a request object, etc. 
This included mock implementations of common workflows like creating a customer, checking out a subscription, handling a webhook, etc. We iterated on the syntax until it felt idiomatic and IDE friendly.</p><p>This phase was crucial for nailing the ergonomics of the SDK. Quite a few decisions were debated and iterated internally. For example, we decided to do away with the common Go pattern of passing a context.Context as the first parameter to all service methods. Contexts are useful in very limited scenarios and making them a default first parameter looked noisy and added cognitive load. Given the following options, which one would you rather choose?</p><pre>// Option 1: With context as first param, where it&#39;s mostly not used effectively<br>req := &amp;chargebee.SubscriptionListRequest{}<br>res, err := client.Subscription.List(context.TODO(), req)<br><br>// Option 2: With an optional method to set the context if required<br>req := &amp;chargebee.SubscriptionListRequest{}<br>req.SetContext(context.Background())<br>res, err := client.Subscription.List(req)</pre><p>Another example of obsessing over the small details: we did not want a New{Resource}Request wrapper to build each request type. This took a bit of a deep dive into how embedded structs work and how best to expose the additional methods on the request object without leaking abstractions.</p><pre>// Option 1: With an additional constructor for each request type<br>req := chargebee.NewSubscriptionListRequest(&amp;chargebee.SubscriptionListParams{...})<br><br>// Option 2: No additional function call<br>req := &amp;chargebee.SubscriptionListRequest{...}</pre><p>A key guiding principle at this stage was: “It’s OK to be opinionated”. Understanding the options in front of us, and choosing what <em>we think</em> is the best way forward helped us focus on the single best solution. 
Instead of providing two different ways of doing things, which adds complexity and confusion, we chose the most pragmatic way forward.</p><p>We also focused on the internal consistency of the library. For example, if we are to make a raw request whose response type is unknown, we can call the internal send method with the following signature:</p><pre>res, err := send[*UnknownResponse](&amp;BlankRequest{}, config)</pre><p>P.S.: We will eventually expose a RawRequest method using the above technique as an escape hatch, where a constructed http.Request can be sent to the API and the raw response made available for processing.</p><h4>Bottom-up implementation</h4><p>Once the desired user-facing API was finalized, it was time to get to work. All our SDKs are built as a combination of hand-crafted code, which lays the foundations, and generated code, which provides the required models and methods to invoke the API endpoints. Our <a href="https://github.com/chargebee/sdk-generator">sdk-generator</a> is responsible for parsing the <a href="https://github.com/chargebee/openapi">OpenAPI spec</a>, constructing the objects (resources, input and output schemas, enums, etc.) and their relationships, and rendering the generated code for each target language.</p><p>Since our existing SDK had the low-level implementation details, we decided to reuse them to save time and effort. This plumbing code isn’t exposed directly to the users and can be refactored without breaking the high-level APIs. Hence, the main focus was on implementing the required apiRequest, apiResponse, ClientConfig, and Client structs.</p><p>The sdk-generator was then updated to flatten the resource hierarchy and write out the renamed structs and methods. Every resource (e.g. customer) now has a customer.go file that contains all the model definitions, and a customer_service.go with the resource methods. 
This predictable layout makes it easy to look up a symbol, whether manually or with LLMs/coding agents.</p><h4>Testing</h4><p>No major rewrite is complete without sufficient testing. One of the challenges we faced was a lack of unit tests in key methods that handle request serialization, response parsing, custom fields, etc. To refactor with confidence, we wrote tests around the existing behavior first, and only then replaced the internals with the reimplemented methods.</p><p>Some manual testing was also required to ensure the correctness of <a href="https://github.com/chargebee/chargebee-go/blob/v4/README.md#create-a-subscription-with-items">code snippets</a> that were migrated to use the new version. We will continue E2E testing while we are in the beta phase to maximize our chances of catching potential bugs.</p><h3>Next Steps</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*xpxFZEdctMr84t9a" /></figure><p>Now that the beta is out, we are turning to the community for help with testing and feedback. If you already use the chargebee-go/v3 library, please test out the new version with particular attention to usability and correctness. We have a handy <a href="https://github.com/chargebee/chargebee-go/wiki/Go-SDK-v4-migration-guide">migration guide</a> to move from v3 to v4.</p><p><strong>Bonus round</strong>: To improve the readability of long enums (CreditNoteEstimateLineItemDiscountDiscountTypeDocumentLevelDiscount, anyone?!), we are looking to implement pseudo-namespacing in our generated code as shown below. It is analogous to how Go supports underscore-separated numbers to improve readability (10_000_000 === 10000000). 
Let us know if you think this is a good idea!</p><pre>var CreditNoteEstimateEnum struct {<br>  LineItemDiscount struct {<br>    DiscountType struct {<br>      ItemLevelCoupon       CreditNoteEstimateLineItemDiscountDiscountType<br>      DocumentLevelCoupon   CreditNoteEstimateLineItemDiscountDiscountType<br>      PromotionalCredits    CreditNoteEstimateLineItemDiscountDiscountType<br>      ProratedCredits       CreditNoteEstimateLineItemDiscountDiscountType<br>      ItemLevelDiscount     CreditNoteEstimateLineItemDiscountDiscountType<br>      DocumentLevelDiscount CreditNoteEstimateLineItemDiscountDiscountType<br>    }<br>  }<br>}<br><br>func init() {<br>  CreditNoteEstimateEnum.LineItemDiscount.DiscountType.ItemLevelCoupon = CreditNoteEstimateLineItemDiscountDiscountTypeItemLevelCoupon<br>  CreditNoteEstimateEnum.LineItemDiscount.DiscountType.DocumentLevelCoupon = CreditNoteEstimateLineItemDiscountDiscountTypeDocumentLevelCoupon<br>  CreditNoteEstimateEnum.LineItemDiscount.DiscountType.PromotionalCredits = CreditNoteEstimateLineItemDiscountDiscountTypePromotionalCredits<br>  CreditNoteEstimateEnum.LineItemDiscount.DiscountType.ProratedCredits = CreditNoteEstimateLineItemDiscountDiscountTypeProratedCredits<br>  CreditNoteEstimateEnum.LineItemDiscount.DiscountType.ItemLevelDiscount = CreditNoteEstimateLineItemDiscountDiscountTypeItemLevelDiscount<br>  CreditNoteEstimateEnum.LineItemDiscount.DiscountType.DocumentLevelDiscount = CreditNoteEstimateLineItemDiscountDiscountTypeDocumentLevelDiscount<br>}</pre><p>Code is rarely “done”, so we will continue testing the SDK with different scenarios, and update the documentation so that everything is set for a stable release. 
The more feedback we receive, the quicker we can fix any gaps before it’s ready for production use.</p><blockquote>As always, you can join our <a href="https://discord.gg/nEtpvhqzG3">Discord</a> server or send us an email at <a href="mailto:dx@chargebee.com">dx@chargebee.com</a> to share your feedback.</blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=44ed09cf96e2" width="1" height="1" alt=""><hr><p><a href="https://medium.com/chargebee-engineering/how-we-rewrote-the-chargebee-go-sdk-v4-from-scratch-44ed09cf96e2">How we rewrote the Chargebee Go SDK v4 from scratch</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Chargebee CPQ: Evolution from Product Catalog to CPQ]]></title>
            <link>https://medium.com/chargebee-engineering/chargebee-cpq-evolution-from-product-catalog-to-cpq-203d26c85d79?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/203d26c85d79</guid>
            <category><![CDATA[composable-architecture]]></category>
            <category><![CDATA[chargebee]]></category>
            <category><![CDATA[cpq]]></category>
            <category><![CDATA[product-catalog]]></category>
            <category><![CDATA[billing]]></category>
            <dc:creator><![CDATA[Ahad Syed]]></dc:creator>
            <pubDate>Mon, 18 Aug 2025 10:09:53 GMT</pubDate>
            <atom:updated>2025-08-19T11:18:14.969Z</atom:updated>
            <content:encoded><![CDATA[<h3>Chargebee CPQ: Evolution from Product Catalog to CPQ</h3><p>In one of the iconic moments in the software industry, during a press and analyst <a href="https://www.microsoft.com/en-us/windows-server/blog/2015/05/06/microsoft-loves-linux/">briefing</a> in March 2015, Microsoft CEO Satya Nadella presented a slide that read, “Microsoft loves Linux.” He also stated that he wasn’t interested in fighting old battles, especially when Linux had become a vital part of modern business technology. “If you don’t jump on the new,” he said, “you don’t survive.” Microsoft had long held the belief that Windows was central to its ecosystem and viewed Linux as a rival, resisting the broader industry shift toward open-source adoption.</p><p>The story at Chargebee wasn’t all that different from that of Microsoft. We traditionally considered the product catalog and the selling rules to be tightly coupled with the billing system, making it difficult to envision any other source of truth for catalog data. This perspective positioned us as competitors to CPQ systems rather than collaborators.</p><p>In Spring 2025, Chargebee launched a suite of <a href="https://www.chargebee.com/cpq/">CPQ products</a>, which included:</p><ul><li>a native CPQ offering, so customers can handle quoting, pricing, and guided selling within the Chargebee Billing system;</li><li>Chargebee CPQ in a CRM platform of your choice;</li><li>Bring Your Own CPQ.</li></ul><p>With this launch, Chargebee also lets merchants bring any CPQ of their choice and integrate it with Chargebee Billing. In this post, we will cover the technical details that made this possible. With a composable architecture, we’ve now decoupled catalog workflows from billing workflows, enabling seamless integration with any CPQ system. 
This transforms us from a standalone solution into a more interoperable platform for any CPQ to integrate with.</p><p>But before diving into those details, let’s first understand the importance of the product catalog in a billing system.</p><h3>Product Catalog Data Model</h3><p>A product catalog defines the SKUs that a business wants to sell and also includes the associated price points. The product catalog is critical to the functioning of a billing system, serving as the configuration for the price points and selling rules used to create sales orders. For a billing system, a product catalog acts as a template created by the merchant to generate specific subscriptions for customers.</p><p>Chargebee’s product catalog data model for billing is inherently flat. This allows the billing system to invoice flexibly and structure items in a way that favors invoicing and accounting systems: the items in the catalog map one-to-one onto invoice line items.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1021/0*ZNun7pAlev-hfNnO" /><figcaption>Chargebee Billing with the various components</figcaption></figure><p>Below is a representation of the flat data model used in Chargebee Billing: each item’s prices across different frequencies and currencies are stored as a flat list.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*FjiJIBOqcdg-DAXI" /></figure><p>A flat model leads to catalog bloat and creates management overhead in handling the flat list of item prices. The model also included built-in selling rules applied when a subscription was created. 
These selling rules were tightly coupled to the billing system, and that coupling had to be broken to enable truly flexible integration of any CPQ with the billing system.</p><p>To solve the catalog bloat issue in Product Catalog 1.0, Chargebee introduced <a href="https://www.chargebee.com/docs/billing/2.0/product-catalog/product-catalog">Product Catalog 2.0</a> to enable better management of the product catalog. This model brought structure to overall catalog management. Item prices could now be grouped under items, each item acting as a container for prices of the same product across different selling frequencies and currencies. Items, in turn, are grouped into item families, so that similar products sold together can be organized under the same family.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*qy90jXgn1lnzmaLk" /></figure><p>Product Catalog 2.0 is a more structured and flexible system. It organizes catalog items more clearly and separates item configuration and selling rules from pricing. This means you can manage how items are sold independently of how they’re priced, giving you greater control and customization over selling options.</p><p>Additionally, Product Catalog 2.0 introduced differential pricing, for setting bundled prices when products are sold together or with specific plans, and enabled the configuration of selling rules for plans, add-ons, and one-time charges. Collectively, these selling rules give the product catalog CPQ-like capabilities.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/578/1*CIg6kVWlwn3P1GIv4EYDeg.png" /><figcaption>Chargebee Billing composable architecture</figcaption></figure><p>The first major evolutionary step was to decouple the product catalog. The realization was that the catalog needed to exist as an independent, standalone service with its own API. This seemingly simple architectural change had profound implications. 
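</p><p>To make the two data models concrete, here is a minimal TypeScript sketch. The field names are illustrative only, not Chargebee’s actual API schema: a flat list of item prices on one side, and the same data grouped item family → item → item prices on the other.</p>

```typescript
// Illustrative sketch only: field names are hypothetical, not Chargebee's schema.

// Flat model: every (item, frequency, currency) combination is its own
// top-level entry, mirroring invoice line items one-to-one.
interface FlatItemPrice {
  id: string;
  itemName: string;
  itemFamily: string;
  currency: string;
  period: "month" | "year";
  priceCents: number;
}

const flatCatalog: FlatItemPrice[] = [
  { id: "pro-usd-monthly", itemName: "Pro Plan", itemFamily: "Plans", currency: "USD", period: "month", priceCents: 4900 },
  { id: "pro-usd-yearly", itemName: "Pro Plan", itemFamily: "Plans", currency: "USD", period: "year", priceCents: 49900 },
  { id: "pro-eur-monthly", itemName: "Pro Plan", itemFamily: "Plans", currency: "EUR", period: "month", priceCents: 4500 },
  { id: "pro-eur-yearly", itemName: "Pro Plan", itemFamily: "Plans", currency: "EUR", period: "year", priceCents: 45900 },
];

// Grouped model: each item is a container for its price points,
// and items roll up into item families.
type GroupedCatalog = Record<string, Record<string, FlatItemPrice[]>>;

function groupCatalog(prices: FlatItemPrice[]): GroupedCatalog {
  const grouped: GroupedCatalog = {};
  for (const p of prices) {
    grouped[p.itemFamily] ??= {};
    grouped[p.itemFamily][p.itemName] ??= [];
    grouped[p.itemFamily][p.itemName].push(p);
  }
  return grouped;
}

// One item ("Pro Plan") now holds all four price points under one family.
const catalog2 = groupCatalog(flatCatalog);
```

<p>The flat list stays in the shape invoicing wants, while the grouped view is what catalog management and selling rules operate on.</p><p>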
Once decoupled, the product catalog could serve as the single source of truth for multiple systems. This was the key that unlocked a more sophisticated sales and billing motion. The most immediate and impactful integration was with CPQ (Configure, Price, Quote) systems.</p><p>Sales teams live in their CRM and CPQ tools. They need to create complex quotes, apply discounts, and structure deals without ever touching the billing system. With a decoupled catalog, the CPQ system can pull products, pricing, and rules directly from the catalog API. This ensures that the deal structured by the sales team is perfectly aligned with what the billing system can actually execute. The result? A seamless handover from sales to billing, eliminating manual data entry and costly errors.</p><h3>Introducing the SalesOrder Interface</h3><p>Decoupling the product catalog and integrating a CPQ was just the beginning. The decoupled structure dramatically reduces complexity and makes it significantly faster to launch in new markets or experiment with pricing strategies. The true transformation, though, is the move to a fully composable, headless architecture. In this model, the core billing engine becomes a powerful orchestrating hub, connecting to a variety of specialized, best-of-breed services via API.</p><p>To create a truly composable interface between the sales and billing processes, we introduced a crucial intermediary, the <a href="https://apidocs.chargebee.com/docs/api/sales_orders"><strong>Sales Order interface</strong></a>, to separate the quoting process from the billing process. It establishes a clear communication channel between the CPQ/catalog system and the billing system.</p><p>The Sales Order interface captures all the information the billing system needs to initiate its invoicing workflows. 
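</p><p>As a sketch of that handover, consider mapping an accepted CPQ quote into a sales order payload. The shape below is hypothetical and for illustration only; the real schema lives in the Sales Orders API documentation.</p>

```typescript
// Hypothetical shapes: field names are illustrative, not Chargebee's API schema.
interface QuoteLine { sku: string; quantity: number; unitPriceCents: number; discountCents?: number }
interface AcceptedQuote { quoteId: string; customerId: string; currency: string; lines: QuoteLine[] }

interface SalesOrderPayload {
  externalRef: string; // traceability back to the CPQ quote
  customerId: string;
  currency: string;
  lines: { sku: string; quantity: number; amountCents: number }[];
  totalCents: number;
}

// The CPQ owns configuration and pricing; the billing system only receives
// the finished order, ready to drive invoicing workflows.
function toSalesOrder(quote: AcceptedQuote): SalesOrderPayload {
  const lines = quote.lines.map((l) => ({
    sku: l.sku,
    quantity: l.quantity,
    amountCents: l.quantity * l.unitPriceCents - (l.discountCents ?? 0),
  }));
  return {
    externalRef: quote.quoteId,
    customerId: quote.customerId,
    currency: quote.currency,
    lines,
    totalCents: lines.reduce((sum, l) => sum + l.amountCents, 0),
  };
}

const order = toSalesOrder({
  quoteId: "Q-1042",
  customerId: "cust_123",
  currency: "USD",
  lines: [
    { sku: "pro-usd-monthly", quantity: 5, unitPriceCents: 4900, discountCents: 2500 },
    { sku: "onboarding-fee", quantity: 1, unitPriceCents: 19900 },
  ],
});
```

<p>Note that nothing billing-specific leaks back into the quoting side: the order carries only what invoicing needs, plus a reference for traceability.</p><p>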
There are two ways a Sales Order can be created:</p><ul><li>Sales-led workflows: once a quote is created and accepted in the CPQ system, the CPQ initiates the workflow to create the Sales Order.</li><li>Checkout workflows: upon cart payment, the checkout page generates the Sales Order object, which then triggers the downstream billing workflows.</li></ul><p>The Sales Order interface clearly delineates the boundary between the product catalog and selling system on one side and the billing system on the other. This decoupling lets the billing system integrate with any product catalog that includes more complex selling rules, allowing merchants to bring their own CPQ when integrating with Chargebee Billing.</p><p>In revenue recognition (<a href="https://www.chargebee.com/docs/revrec/getting-started-with-revrec/about-revrec">RevRec</a>), the Sales Order kicks off the revenue recognition process by creating a revenue schedule in accordance with accounting standards. It identifies performance obligations, allocates revenue, sets up deferred revenue, and automates expense workflows, ensuring accurate revenue recognition and compliance with ASC 606 and IFRS 15.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*i-MrhSZ8805mgk5S" /></figure><h3>Ensuring Backward Compatibility</h3><p>A key consideration in decoupling the product catalog from the billing system was ensuring that existing customer workflows and assumptions remained unaffected. <a href="https://www.hyrumslaw.com/">Hyrum’s Law</a> guided how we maintained consistency, not only in the APIs but also in the UI: all API test automation for legacy workflows had to keep functioning exactly as expected. Our robust automation suite was instrumental in enabling the decoupling while preserving backward compatibility. 
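</p><p>The flavor of such a check can be sketched as a contract test: pin the keys a legacy response has always exposed, so a refactor that drops one fails fast. Names here are hypothetical stand-ins, not Chargebee’s actual suite.</p>

```typescript
// Hypothetical contract test: pin the observable shape of a legacy API
// response so internal refactors cannot silently change it (Hyrum's Law).
// Names are illustrative; the real suite exercises Chargebee's actual APIs.

interface LegacyPlanResponse {
  plan: { id: string; price: number; currency_code: string };
}

// Stand-in for the refactored code path under test.
function fetchPlanLegacy(id: string): LegacyPlanResponse {
  return { plan: { id, price: 4900, currency_code: "USD" } };
}

// Every key the old API exposed must still be present; per Hyrum's Law,
// someone, somewhere, depends on each of them.
const pinnedKeys = ["id", "price", "currency_code"] as const;

function legacyContractHolds(resp: LegacyPlanResponse): boolean {
  return pinnedKeys.every((k) => k in resp.plan);
}

const contractOk = legacyContractHolds(fetchPlanLegacy("silver"));
```

<p>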
The suite ensured that the changes introduced did not break existing functionality, providing a safety net for continued reliability.</p><p>To further safeguard backward compatibility, we proactively mitigated any risks to existing users, ensuring there was no downtime or disruption to their billing workflows. As a result, customers migrating between CPQ systems on Chargebee Billing continue to experience consistent and reliable billing and invoicing.</p><h3>Summary</h3><p>Chargebee’s architecture has evolved from a tightly coupled monolithic system to a modern, composable, and decoupled ecosystem built for flexibility and scale. This transformation was driven by the growing diversity of merchant needs across industries, each requiring nuanced control over catalog configuration, pricing, and quoting.</p><p>Today, Chargebee gives organizations the freedom to choose the CPQ solution that best fits their sales workflow, whether native to Chargebee or integrated with best-in-class platforms like Salesforce, HubSpot, Conga, or DealHub. By decoupling the product catalog from the billing engine and introducing the Sales Order interface, we’ve made CPQ interoperability seamless. It is a fundamental shift in how Revenue Growth Management can operate at scale. 
CPQ workflows are now a native part of the Chargebee DNA, ensuring alignment between sales intent and billing execution without friction or compromise.</p><p>A composable architecture unlocks extensible CPQ capabilities with Chargebee Billing.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=203d26c85d79" width="1" height="1" alt=""><hr><p><a href="https://medium.com/chargebee-engineering/chargebee-cpq-evolution-from-product-catalog-to-cpq-203d26c85d79">Chargebee CPQ: Evolution from Product Catalog to CPQ</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Chargebee’s Node.js SDK Gets a (Long-Due) Makeover]]></title>
            <link>https://medium.com/chargebee-engineering/chargebees-node-js-sdk-gets-a-long-due-makeover-2b58a5dde632?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/2b58a5dde632</guid>
            <category><![CDATA[chargebee]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[nodejs]]></category>
            <dc:creator><![CDATA[Sriram Thiagarajan]]></dc:creator>
            <pubDate>Mon, 06 Jan 2025 08:44:25 GMT</pubDate>
            <atom:updated>2025-01-06T11:19:59.244Z</atom:updated>
            <content:encoded><![CDATA[<p><em>tl;dr: The Chargebee Node.js SDK has a new, unified release: version 3. Rewritten from the ground up with cleaner APIs, robust TypeScript support, and compatibility with the latest JavaScript runtimes; whether you’re configuring the SDK, making API calls, or handling custom headers, v3 has everything you’ll need to code better.</em></p><p>The <a href="https://github.com/chargebee/chargebee-node">Chargebee Node.js SDK</a> has been around for more than a decade. It sees a combined 100K weekly downloads. But it just hasn’t kept pace with the fast-moving JavaScript ecosystem.</p><p>New JavaScript runtimes have appeared in recent years, all built around speed, well-optimized APIs, and a more complete DX. Goals that we’ve also made central to this new release.</p><p>Meanwhile, serving <em>two </em>separate SDKs, one for Node.js and one for TypeScript, has only bred confusion. Especially as both SDKs suffered from outdated dependencies, inconsistent TypeScript support, and verbose workflows.</p><p>So we asked ourselves: what should a great SDK look like, today?</p><p>The answer couldn’t be a patch or a tweak.</p><p>It had to be a rewrite.</p><h3>The big change: One unified SDK</h3><p>This new version:</p><ul><li>Supports both ESM and CJS module systems</li><li>Runs smoothly on Bun, Deno, Cloudflare Workers, and other modern runtimes</li><li>Leverages native features such as fetch and Promise</li></ul><p>This also means that the <a href="https://www.npmjs.com/package/chargebee-typescript">chargebee-typescript</a> package is being deprecated. But anyone using it will have ample time (and support from us) to <a href="https://github.com/chargebee/chargebee-node/wiki/Migration-guide-for-v3">migrate</a> to the new version.</p><p>Let’s look at just some of the quality-of-life improvements that v3 delivers.</p><h3>What’s new?</h3><h4>Idiomatic, modern JavaScript</h4><p>The new SDK just <em>works</em>. 
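</p><p>Concretely, a v3 call looks roughly like the sketch below. The class is a minimal stand-in for the real <code>chargebee</code> package so the example is self-contained; consult the migration guide for exact signatures.</p>

```typescript
// Sketch of the v3 call style: constructor-based config, camelCase params,
// promise-returning methods with no .request() suffix. The class below is a
// minimal stand-in for the real `chargebee` package, not its actual code.
type Customer = { id: string; firstName?: string; email?: string };

class Chargebee {
  constructor(private cfg: { site: string; apiKey: string }) {}
  customer = {
    create: async (params: {
      firstName?: string;
      email?: string;
    }): Promise<{ customer: Customer }> => ({
      customer: { id: "cust_stub", ...params },
    }),
  };
}

// Configuration moves into the constructor; multiple client
// instances (e.g. one per site) can now coexist.
const chargebee = new Chargebee({ site: "your-site", apiKey: "your-api-key" });

async function createCustomer(): Promise<Customer> {
  // Native async/await, no .request() chaining:
  const { customer } = await chargebee.customer.create({
    firstName: "Ada",
    email: "ada@example.com",
  });
  return customer;
}
```

<p>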
Here’s how:</p><ul><li>Stronger TypeScript inference that reduces errors and guesswork</li><li>Function parameters now use camelCase instead of snake_case</li><li>Native async/await is supported for cleaner syntax</li><li>The ability to create multiple client instances for more complex use cases</li></ul><h4>Less boilerplate, more flow</h4><p>This update takes a serious swing at verbosity, improving readability across the board. For example:</p><ul><li>You no longer need to append .request() to every API call.</li><li>Client setup is more efficient: functions like .configure() and .updateTimeoutMillis() are now consolidated into simpler configuration parameters.</li></ul><h4>(Much) better typings</h4><p>Version 3 also has great TypeScript coverage that helps with writing correct code faster and catching issues much earlier in the development cycle.</p><p>For example: <a href="https://apidocs.chargebee.com/docs/api/advanced-features#custom_fields">custom fields</a> in Chargebee always begin with cf_.</p><p>With the new SDK, if you try to name the custom field something else, TypeScript <em>will</em> point it out with that beautiful, signature squiggly line 😄</p><figure><img alt="An example of passing a custom field when creating a customer in Chargebee" src="https://cdn-images-1.medium.com/max/490/0*HIbf0QCEGmUB5E5U" /><figcaption>An example of passing a custom field when creating a customer in Chargebee.</figcaption></figure><p>Type inference for responses is far better as well.</p><p>The screenshot below shows a code sample for creating a customer in Chargebee with the old version of the SDK. Look closely: it returns a generic customer object.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/579/0*SQoPxmdKSzUCR4Fr" /><figcaption>A code sample that illustrates the creation of a customer in Chargebee using the older version of the SDK. 
Note that the response doesn’t have a proper TypeScript definition.</figcaption></figure><p>Compare this with the same sample in the new SDK version below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/543/0*mJyyqQhXNVvSAF4x" /><figcaption>A code sample that illustrates the creation of a customer in Chargebee using the new version of the SDK. Note that the response has a proper TypeScript definition and is inferred correctly.</figcaption></figure><h3>What’s next?</h3><p>We plan to improve the SDK experience further with more contextual documentation and the ability to use your own HTTP client for added flexibility. We’re not stopping there. This release sets a high standard for the other SDK updates we’re starting to work on.</p><p>If you’re a developer building with Chargebee, there are more exciting things coming your way!</p><p>A note on how we build: <br>We auto-generate <a href="https://apidocs.chargebee.com/docs/api/#client_library">client libraries</a> in seven different programming languages using a custom code generator (curious about how that works? Leave a comment, we’d love to share 😉) based on our <a href="https://github.com/chargebee/openapi">OpenAPI spec</a>.</p><p>If you have any feedback/comments/questions, please feel free to reach out to dx[at]chargebee[dot]com.</p><p>— Chargebee DX Team</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2b58a5dde632" width="1" height="1" alt=""><hr><p><a href="https://medium.com/chargebee-engineering/chargebees-node-js-sdk-gets-a-long-due-makeover-2b58a5dde632">Chargebee’s Node.js SDK Gets a (Long-Due) Makeover</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Streamlining Tax Integrations with Tax SPI]]></title>
            <link>https://medium.com/chargebee-engineering/streamlining-tax-integrations-with-tax-spi-46298cb30031?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/46298cb30031</guid>
            <category><![CDATA[taxes]]></category>
            <category><![CDATA[spi]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[system]]></category>
            <category><![CDATA[integration]]></category>
            <dc:creator><![CDATA[Surya Kannapiran]]></dc:creator>
            <pubDate>Fri, 18 Aug 2023 12:48:53 GMT</pubDate>
            <atom:updated>2023-08-18T12:53:04.315Z</atom:updated>
            <content:encoded><![CDATA[<p>At Chargebee, we’re at the forefront of revenue growth management for merchants worldwide. A core aspect of our service is our sophisticated tax system, which shoulders a myriad of responsibilities to ensure smooth and compliant financial operations for subscription businesses. These responsibilities include:</p><ul><li>Estimating taxes applicable to an invoice.</li><li>Calculating country-specific taxes.</li><li>Creating tax profiles for distinct product or service groups that fall under varying tax rates and compliance measures.</li><li>Managing tax exemptions for specific customers.</li><li>Validating customer shipping addresses, both for accurate tax calculation and for product delivery purposes.</li><li>Submitting invoice and credit note documents for precise tax reconciliation.</li></ul><p>With businesses spanning borders, reconciling numerous global tax rules with recurring invoicing is a challenging task. To manage this complexity, we’ve refined Chargebee’s tax system through three stages of innovation. Let’s walk through them:</p><h3>Stage 1: In-House Solutions</h3><p>Our tax system began as an in-house module within our core billing software. Merchants today still have the option of this built-in capability, packed with features like multi-region support, tax exemption management, and detailed tax reports.</p><p>Let’s peek into the architecture of this initial system. Merchants connect with Chargebee Billing either through our user interface or APIs. 
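</p><p>The responsibilities listed above can be condensed into a single interface sketch. Method and type names here are illustrative, not Chargebee’s internal API, and the flat-rate implementation is a toy:</p>

```typescript
// Illustrative sketch of an in-house tax module's surface.
// Names are hypothetical, not Chargebee's internal API.
interface Address { line1: string; city: string; zip: string; country: string }
interface TaxLine { description: string; amountCents: number }

interface TaxModule {
  estimateTax(lines: TaxLine[], shipTo: Address): number;            // tax due, in cents
  validateAddress(addr: Address): boolean;                           // for tax + delivery
  submitDocument(kind: "invoice" | "credit_note", id: string): void; // reconciliation
}

// Toy implementation: one flat rate per country, in basis points.
const RATES_BPS: Record<string, number> = { US: 875, DE: 1900 };

const inHouseTax: TaxModule = {
  estimateTax(lines, shipTo) {
    const rate = RATES_BPS[shipTo.country] ?? 0;
    const subtotal = lines.reduce((s, l) => s + l.amountCents, 0);
    return Math.round((subtotal * rate) / 10000);
  },
  validateAddress(addr) {
    return addr.zip.length > 0 && addr.country.length === 2;
  },
  submitDocument() {
    // In the real system the document would be archived for reconciliation.
  },
};

const tax = inHouseTax.estimateTax(
  [{ description: "Pro Plan", amountCents: 10000 }],
  { line1: "1 Main St", city: "Boston", zip: "02101", country: "US" },
);
// 8.75% of $100.00 → 875 cents
```

<p>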
For clarity, the diagrams here focus on the API route:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*DoCatlf8FIw6DDNT" /><figcaption>Stage 1: System overview.</figcaption></figure><p>Zooming into the Chargebee container reveals the dedicated tax module, which handles all tax-specific tasks:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*yjF-yJDG9bKNVHKM" /><figcaption>Stage 1: Tax module, built in-house, added to the Chargebee container.</figcaption></figure><p>Over time, we noticed many merchants opting for third-party tax services. While our in-house solution was comprehensive, it became essential to cater to our merchants’ diverse requirements.</p><h3>Stage 2: Integrating with Third-Party Tax Providers</h3><p>To address these growing needs, we integrated with third-party tax providers like Avalara and TaxJar. The diagram below illustrates this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*TK-8DGKLLPyA-4lW" /><figcaption>Stage 2: System overview.</figcaption></figure><p>The following diagram displays the new internal components that were added to Chargebee Billing as part of the integration: the client modules for Avalara and TaxJar.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*mFZxDr3Nmlk07xwH" /><figcaption>Stage 2: Client modules for tax service providers added to the Chargebee Billing container.</figcaption></figure><p>Yet these third-party integrations brought their own challenges, especially the complexity of building and maintaining each integration. 
Given our vast merchant base, it became evident that a single engineering team building these integrations wouldn’t scale.</p><h3>Stage 3: The Dawn of Tax SPI</h3><p>The problem was clear: how do we enable merchants, tax service providers, or independent software vendors (ISVs) to build integrations without being wholly dependent on Chargebee?</p><p>Enter the Tax SPI.</p><p>We introduced an API contract that sets the expectations for how our billing module interacts with a tax provider. Any tax service provider keen on integrating with Chargebee can now build an <a href="https://en.wikipedia.org/wiki/Adapter_pattern#Definition">adapter</a> compliant with the <a href="https://chargebee.atlassian.net/wiki/x/BoAeFw">SPI’s specifications</a>.</p><p>Case in point: the <a href="https://marketplace.chargebee.com/details/Anrok">Anrok Adapter</a> was the first to be crafted using the Tax SPI. Merchants can now effortlessly integrate with Anrok via the <a href="https://marketplace.chargebee.com/browse/categories/tax-management">Chargebee Marketplace</a>. The addition of Anrok’s self-built adapter to our Marketplace stands as a testament to the SPI’s success. Following this, we developed an adapter for another tax provider, <a href="https://www.chargebee.com/docs/vertex.html">Vertex</a>, which is currently in early access.</p><p>With the Tax SPI, we’re not just forging a path for streamlined integrations; we’re championing flexibility. While we continue to build and host integrations for strategic tax providers like Vertex, we also empower other tax providers and ISVs to self-serve, building and hosting their own integrations with Chargebee.</p><p>All future tax service integrations will adopt this design.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*exEBFXDBFl2S38lP" /><figcaption>Stage 3: System overview. 
(Note that “Vertex adapter” is depicted as “internal” because it is built and hosted by Chargebee.)</figcaption></figure><p>In the diagram below, we show how the Tax SPI restructures Chargebee Billing’s internals, eliminating the need for a dedicated client library for each tax provider. Instead, the same SPI client library is used to connect to all new tax service provider integrations via their respective adapters.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ZXQja9bqkVoPhA3B" /><figcaption>Stage 3: The SPI client library added to the Chargebee Billing container.</figcaption></figure><h3>Implementing the Tax SPI</h3><p>For vendors implementing tax service adapters using our SPI, the process can be summed up in a few steps:</p><ol><li><strong>Sharing &amp; Implementation</strong>: We share the Tax SPI and a configuration template with the third-party vendor. Subsequently, the vendor implements their adapter app in alignment with the SPI. Once this is achieved, they register their adapter app on the Chargebee Marketplace, complete with configuration details, ranging from identity configurations to <a href="https://chargebee.atlassian.net/wiki/spaces/PCV/pages/389218305/Tax+SPI#SPI-Capabilities">supported capabilities</a>.</li><li><strong>Validation &amp; Integration</strong>: The configuration undergoes rigorous validation before being loaded into Chargebee. Once the adapter is implemented, we run pre-written <a href="https://learning.postman.com/docs/writing-scripts/test-scripts/">Postman tests</a> to evaluate the adapter, sharing any discrepancies for rectification. Furthermore, we have an integration suite assessing end-to-end workflows.</li><li><strong>Dynamic Integration</strong>: Any modifications to the vendor configuration are dynamically reflected on the vendor app’s screen within the <a href="https://app.chargebee.com/">Chargebee Billing app</a>. 
This eliminates the need for additional development work on our end.</li><li><strong>Merchant Onboarding</strong>: After ironing out any issues from the regression and integration suite, we onboard merchants with this new tax vendor integration via the Marketplace.</li></ol><p>Keen to delve deeper? Check out Chargebee’s <a href="https://chargebee.atlassian.net/wiki/x/AQAzFw">Tax SPI documentation</a>.</p><h4>Technical review, edits, and diagrams</h4><p><a href="https://medium.com/u/26556d69b983">John Machan</a>, Staff Technical Writer, Chargebee</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=46298cb30031" width="1" height="1" alt=""><hr><p><a href="https://medium.com/chargebee-engineering/streamlining-tax-integrations-with-tax-spi-46298cb30031">Streamlining Tax Integrations with Tax SPI</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building Policy Gate for DevSecOps using Open Policy Agent]]></title>
            <link>https://medium.com/chargebee-engineering/building-policy-gate-for-devsecops-using-open-policy-agent-999dd734744a?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/999dd734744a</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[devsecops]]></category>
            <category><![CDATA[open-policy-agent]]></category>
            <dc:creator><![CDATA[Nikhil Mittal]]></dc:creator>
            <pubDate>Wed, 30 Nov 2022 08:50:05 GMT</pubDate>
            <atom:updated>2022-11-30T08:50:05.811Z</atom:updated>
            <content:encoded><![CDATA[<p>In our <a href="https://medium.com/chargebee-engineering/building-appsec-pipeline-for-continuous-visibility-d430beb0a78f">last blog</a>, we detailed our approach to building a continuous application security pipeline, with the objective of providing centralized visibility into the overall security posture of production-touching repositories using open-source tools.</p><p>With this implementation, our security workflow does not fail even if the tools detect vulnerabilities in the scans. This was a foundation stone for our shift-left initiative.</p><p>Next, to drive adoption of our security workflow across individual production-touching repositories in Chargebee, we decided to build a security policy engine using <a href="https://www.openpolicyagent.org/">OPA</a>. The engine evaluates the data produced by the open-source tools and acts as a gatekeeper, letting us effectively control the success or failure of the security workflow while also managing exceptions.</p><p>This is what the current security workflow looks like:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/723/1*Xb4yHurtw4rni4HYV1zLuA.png" /></figure><p>The policy engine evaluates data produced by the open-source tools that we use:</p><ul><li>SAST -&gt; SemGrep</li><li>SCA -&gt; OWASP Dependency Checker</li><li>Secret Scanning -&gt; GitLeaks</li><li>SBOM -&gt; CycloneDx</li></ul><p>All these tools produce JSON-formatted reports, which are combined using <a href="https://stedolan.github.io/jq/">jq</a> and fed to the policy engine.</p><h3>Security Policy Engine</h3><p>The security policy engine starts with <em>default.rego</em>, which checks whether the overall count of violations is zero.</p><pre>package cb.devsecops.policy<br><br>default allow = false<br><br>allow {<br>  count(violations) == 0<br>}</pre><p>Violations are defined separately for each tool we use in the security pipeline. Let’s consider an example of a SemGrep policy implementation. 
For SemGrep, the policy engine has a <em>sast.rego</em> file that first checks all reported issues against a configured severity. For instance, if we decide to fail SemGrep only for <strong>ERROR</strong>-categorized issues, we define that severity in the configuration file; the same goes for other tools, e.g., for SCA we can choose to fail only on <strong>CRITICAL</strong> or <strong>HIGH</strong> severity issues.</p><p>This is what a sample SAST policy looks like:</p><pre>package cb.devsecops.policy<br><br>import future.keywords.every<br>import future.keywords.in<br><br>violations[{&quot;message&quot;: msg, &quot;code&quot;: code}] {<br>  issue := input.semgrep.results[_]<br>  issue.extra.severity in data.config.semgrep.fail_on_severities<br>  not semgrep_exempted(issue)<br><br>  msg := &quot;SAST results contain WARNING or ERROR issues&quot;<br>  code := &quot;sast_fail&quot;<br>}<br><br>semgrep_exempted(issue) {<br>  # Bind one exception entry so every attribute check below applies<br>  # to the same entry (each bare _ would otherwise iterate independently)<br>  exc := data.config.exceptions.semgrep[_]<br><br>  exc.attributes.fingerprint == issue.extra.fingerprint<br><br>  # Support glob so that we can use patterns like &#39;javascript.**&#39;<br>  glob.match(exc.attributes.check_id, [&quot;.&quot;], issue.check_id)<br>  glob.match(exc.attributes.path, [&quot;/&quot;], issue.path)<br>}</pre><p>Once the severity check is complete, the engine moves on to exception management. False positives are a prevalent scenario in security scans.</p><p>To deal with this, we have defined a way to add exceptions for each tool. So the next check is whether the flagged issue is configured as an exception.</p><p>For example, if the policy engine catches a HIGH severity issue that has been registered as an exception, the security workflow won’t fail. If the issue matches a failing severity and is not configured as an exception, the security workflow will fail. 
Let’s consider an example of exception management in the policy engine:</p><pre>{<br>    &quot;exceptions&quot;: {<br>        &quot;semgrep&quot;: [<br>            {<br>                &quot;reason&quot;: &quot;This is an example exception criteria&quot;,<br>                &quot;attributes&quot;: {<br>                    &quot;check_id&quot;: &quot;example-check-id&quot;,<br>                    &quot;fingerprint&quot;: &quot;example-fingerprint&quot;,<br>                    &quot;path&quot;: &quot;/target/example/path/file.ts&quot;<br>                }<br>            }<br>        ],<br>        &quot;gitleaks&quot;: [<br>            {<br>                &quot;reason&quot;: &quot;This is an example exception criteria&quot;,<br>                &quot;attributes&quot;: {<br>                    &quot;StartLine&quot;: 21,<br>                    &quot;EndLine&quot;: 21,<br>                    &quot;File&quot;: &quot;/target/example/path/file.ts&quot;,<br>                    &quot;Commit&quot;: &quot;&lt;commitSHA&gt;&quot;<br>                }<br>            }<br>        ],<br>        &quot;odc&quot;: [<br><br>        ]<br>    }<br>}</pre><p>Once the policy engine is integrated with the security workflow, this is how it looks in action:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Qxf1tyIDgE3K21DQbMLDFw.png" /></figure><h3>Closing note</h3><p>With this tooling, we at Chargebee intend to drive policy-as-code adoption and encourage security policies to be codified in the pipeline.</p><p>The possibilities of adopting policy as code are limitless. Any change in security policy means we need to change only the policy code, not the rest of the system. 
It also ensures the separation of concerns between tool developers, integrators, and policymakers.</p><p>For comments or feedback, you can get in touch with me over <a href="https://twitter.com/c0d3G33k">Twitter</a> 😀</p><p>If you are interested in our work and want to solve complex problems in SaaS products, platform &amp; cloud infrastructure engineering — <a href="https://www.chargebee.com/careers/engineering-culture/">we are hiring</a>!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=999dd734744a" width="1" height="1" alt=""><hr><p><a href="https://medium.com/chargebee-engineering/building-policy-gate-for-devsecops-using-open-policy-agent-999dd734744a">Building Policy Gate for DevSecOps using Open Policy Agent</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building AppSec Pipeline for Continuous Visibility]]></title>
            <link>https://medium.com/chargebee-engineering/building-appsec-pipeline-for-continuous-visibility-d430beb0a78f?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/d430beb0a78f</guid>
            <category><![CDATA[devsecops]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[appsec]]></category>
            <dc:creator><![CDATA[Nikhil Mittal]]></dc:creator>
            <pubDate>Thu, 21 Jul 2022 12:46:44 GMT</pubDate>
            <atom:updated>2022-07-21T12:46:44.191Z</atom:updated>
            <content:encoded><![CDATA[<p>Most SaaS organizations need high-velocity engineering, with multiple releases a day, where security &amp; engineering teams are disproportionately scaled and no one likes to be blocked by another team. So the AppSec industry is shifting left. This means conventional security testing as a pre-release activity is no longer effective in a fast-paced continuous environment. In this blog, we will explain our approach to building an application security pipeline for continuous security scanning using free and open-source tools for <a href="https://en.wikipedia.org/wiki/Static_application_security_testing">SAST</a>, <a href="https://en.wikipedia.org/wiki/Dynamic_application_security_testing">DAST</a>, <a href="https://owasp.org/www-community/Component_Analysis">SCA</a>, Secrets Scanning, and <a href="https://owasp.org/www-project-cyclonedx/">SBOM</a> generation.</p><p>The objective of this initiative is to provide centralized visibility into the overall security posture of the various production-touching components within the organization. This is a stepping stone toward establishing a shared security responsibility culture through continuous, automated visibility.</p><h3>Overall Architecture</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*G2QjTEXMADrruMkZn4XlQg.png" /></figure><h3>Self-serve Security Solutions</h3><p>All of our security solutions are published as independent container images to AWS ECR, so they can be pulled directly from ECR into any workflow or used locally by developers.</p><p>We use the following open-source tools for security scanning:</p><ul><li>SAST → SemGrep</li><li>SCA → OWASP Dependency Checker</li><li>Secret Scanning → Gitleaks</li><li>SBOM → CycloneDx</li></ul><p>To drive self-service and ease of use and adoption, we wrote a custom wrapper on top of these tools, so our users need not worry about the underlying implementation. 
This gives us additional control of pushing custom rules at any time and having the pipeline apply them across all repositories.</p><p>For example, the SAST solution asks for a few user inputs and then applies the appropriate scan profiles and custom rules based on the input. This makes it easier for our users to use it without worrying about the backend implementation, like what custom rules to use, which SemGrep profiles to use, etc. It also helps us to replace the underlying tool without disturbing the existing implementation.</p><h3>GitHub Repository Architecture</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*z6D5yJ6PJOnjaX7wgnKlUQ.png" /></figure><p>To achieve continuous security scanning without depending on any team, we decided to</p><ul><li>Create an AppSec repository with a main reusable workflow, defined in security-workflow.yml, that can scan any repository using a standard configuration</li><li>For example, if we want to scan a dummy repository, we create a YML file (say, dummy.yml) named after the repo, which uses our reusable workflow and runs on a nightly schedule</li><li>On each run, dummy.yml triggers security-workflow.yml, which clones the target repository, starts scanning, and archives the report to the S3 bucket.</li></ul><h3>Security Data Visualization</h3><p>The data ingestion process is complete once the report is archived in the S3 bucket. The next phase is to visualize the data stored there.</p><p>To solve visualization in a near real-time manner, our Lambda function triggers on the S3 PUT event, checks for the relevant data, and sends it to the security dashboard.</p><h4>DefectDojo</h4><p>DefectDojo has inbuilt parser support for SemGrep, Gitleaks, and ODC. So all the results from these tools go to the DefectDojo dashboard via the Lambda function.</p><h4>DependencyTrack</h4><p>DependencyTrack stores the results from the software bill of materials (SBOM). 
It has inbuilt functionality to detect vulnerabilities in the different components used in an application, and hence can be used to identify</p><ul><li>The components/libraries used in the application (inventory management)</li><li>Vulnerabilities in the components/libraries used</li><li>Violations of the organizational license policy</li></ul><h3>Closing note</h3><p>As a result, our security workflows can be integrated within GitHub Actions and used for PR-level scanning as well, and that is what is driving our shift-left initiative.</p><p>For comments or feedback, you can get in touch with me over <a href="https://twitter.com/c0d3G33k">Twitter</a> 😀</p><p>If you are interested in our work and want to solve complex problems in SaaS products, platform &amp; cloud infrastructure engineering — <a href="https://www.chargebee.com/careers/engineering-culture/">we are hiring</a>!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d430beb0a78f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/chargebee-engineering/building-appsec-pipeline-for-continuous-visibility-d430beb0a78f">Building AppSec Pipeline for Continuous Visibility</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Save cost by running GitHub Actions on Spot Instances inside an autoscaled EKS Cluster]]></title>
            <link>https://medium.com/chargebee-engineering/save-cost-by-running-github-actions-on-spot-instances-inside-an-eks-cluster-342f02ee2320?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/342f02ee2320</guid>
            <category><![CDATA[github-actions]]></category>
            <category><![CDATA[ci-cd-pipeline]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[aws-eks]]></category>
            <dc:creator><![CDATA[Shivam Agarwal]]></dc:creator>
            <pubDate>Fri, 08 Jul 2022 10:38:25 GMT</pubDate>
            <atom:updated>2022-07-09T18:20:58.531Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*AozBtcFt55OWrkIq" /><figcaption>Photo by <a href="https://unsplash.com/@micheile?utm_source=medium&amp;utm_medium=referral">micheile dot com</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><h3>Introduction</h3><p><a href="https://github.com/features/actions">GitHub Actions</a> is a very useful tool for implementing developer workflows such as CI/CD (Continuous Integration and Continuous Delivery) pipelines for your application. By default, GitHub Actions jobs run in the cloud, on machines that are hosted and managed by GitHub.</p><h4>Self-hosted Runners</h4><p>However, sometimes you may want to run your GitHub Actions jobs on your own machines. One reason could be that GitHub-hosted machines do not have the minimum hardware resources required to run your app. For such cases, GitHub gives you the option to use <a href="https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners">Self-hosted runners</a>. As the name suggests, self-hosted runners are machines that are hosted by you and are capable of running GitHub Actions. However, this requires provisioning and configuring virtual machine instances to set up the self-hosted runners (Problem A).</p><h4>Kubernetes Cluster</h4><p>Containerising apps and deploying them inside a Kubernetes cluster is also very common these days. If you already have a Kubernetes cluster, it makes more sense to run self-hosted runners on top of it. Also, self-hosted runners should be able to automatically scale up/down based on the demand for running tests. For example, if two different developers push code to the same branch a few seconds apart, the second developer should not have to wait for the first developer’s workflows to finish execution before the second developer’s workflows start. 
Rather, a second runner should be scaled up automatically to serve the second developer. Also, this second runner should scale down automatically when there are no further workflow jobs to run. Having this ability to automatically scale up and down based on demand will enable you to run workflows in parallel in a cost-efficient manner (Problem B).</p><h4>Spot Instances</h4><p>Some cloud providers like AWS offer spot instances. Amazon EC2 <a href="https://aws.amazon.com/ec2/spot/"><strong>Spot instances</strong></a> are spare compute capacity in the AWS cloud available to you at steep discounts (up to 90%) compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications. However, the catch is that AWS can take the instance back at any time, giving you a notification only two minutes before it actually does so. This is called a Spot Interruption. Using Spot Instances reduces cost significantly, but you need a way to manage spot interruptions (Problem C).</p><p>In this article, the intent is to provide a solution for running autoscaled self-hosted runners inside a Kubernetes cluster using spot instances. But before that, let us compare the costs incurred in running GitHub-hosted runners and self-hosted runners on AWS EKS.</p><h3>Cost Comparison</h3><p>Let’s say that we need to execute 20 jobs per day and each job takes 1 hour to run. GitHub-hosted runners have the following hardware specifications for a Linux-based machine:</p><ul><li>2-core CPU</li><li>7 GB of RAM</li><li>14 GB of SSD space</li></ul><p>We will use a t3.large instance for calculating the cost of a self-hosted runner. 
It has the following specifications:</p><ul><li>2-core CPU</li><li>8 GB of RAM</li></ul><h4>GitHub-hosted Runners</h4><p>Cost of running a Linux-based GitHub-hosted runner — $0.48/hr</p><p>Total time for running all jobs each day — 20 * 1 = 20 hrs / day</p><p>Total cost per month — 0.48 * 20 * 30 = <strong>$288 / month</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*iyCIL55WgzpcNrc5NkpKLA.png" /><figcaption>GitHub Pricing Calculator</figcaption></figure><h4>Self-hosted Runners</h4><p>Cost of running 1 EKS cluster — $73/month</p><p>Cost of running 1 t3.large spot instance with 14 GB storage ~ $23.40 / month</p><p>Total cost ~ 73 + 23.40 = <strong>$96.40 / month</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*O4NOr-vzcS69V33cNHYStQ.png" /><figcaption>AWS Pricing Calculator</figcaption></figure><p>Clearly, for this example, self-hosted runners are less expensive than GitHub-hosted runners. GitHub-hosted runners are priced based on the time duration for which the job is actually running. Self-hosted runners on AWS EKS, on the other hand, are priced based on the time duration for which the host machine is running. This has an interesting consequence: with self-hosted runners, we might be able to run more jobs at the same cost, depending on the job length and job hardware requirements. For example, let’s say a certain job requires 0.5 core CPU and 2 Gi RAM. This means that one t3.large will be able to run at least three such jobs in parallel, without any extra cost. However, GitHub-hosted runners would charge us for those two extra jobs.</p><h3>Autoscaling</h3><p>Autoscaling the runners based on demand is important for reducing costs. Autoscaling will work at two levels.</p><h4>Node Level Auto-scaling</h4><p>Nodes will be auto-scaled based on the resource requirements of the pods. 
This will help us in reducing costs, as idle nodes will be automatically scaled down and new nodes will only be created if there is a need for them. It can be implemented using the <strong><em>Kubernetes Cluster Autoscaler or the Karpenter open-source project.</em></strong></p><h4>Pod Level Auto-scaling</h4><p>Pods will be autoscaled to run on different nodes. If there are more jobs queued by GitHub Actions, more pods will be created to run the jobs. As jobs complete and the queue empties, these pods will be scaled down automatically. This is implemented using the <strong><em>HorizontalRunnerAutoscaler</em></strong> resource of actions-runner-controller.</p><h3>Architecture</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gr1ikgz7LjrKkgP25JRViw.png" /><figcaption>Architecture Diagram</figcaption></figure><p>We will use <a href="https://github.com/actions-runner-controller/actions-runner-controller">actions-runner-controller</a> (ARC) with AWS EKS to solve problems A and B. ARC is an open-source project that operates and manages self-hosted runners for GitHub Actions on your Kubernetes cluster.</p><p>The architecture comprises a single Kubernetes cluster with two namespaces — one for the controller and the other for the runners. AWS ECR contains the custom images that will be used to create the runners. Using custom images is useful if your application or jobs need some dependencies to run.</p><p>Also, note that there are 4 types of resources involved:</p><ul><li>Runner — this pod will actually listen to GitHub for any pending jobs and subsequently run them</li><li>RunnerDeployment — this helps us in managing sets of runners so that we do not have to manage them individually</li><li>HorizontalRunnerAutoscaler — this helps the RunnerDeployment to scale up/down Runner pods. 
Autoscaling can be driven from a webhook event or pull-based metrics, based on how the HorizontalRunnerAutoscaler is configured.</li><li>ActionRunnerController — this is responsible for registering the runner with GitHub and managing other tasks needed for everything to run smoothly.</li></ul><p>We will also need to store certain secrets as Kubernetes Secrets. By default, they are stored unencrypted. We will use AWS Key Management Service (KMS) keys to provide envelope encryption of Kubernetes secrets.</p><h3>Handling Spot Interruption</h3><p>AWS only notifies you 2 minutes before it terminates spot instances. This might not be enough for gracefully terminating runners that are running moderately big jobs (which can easily take minutes to complete). Hence, there is not much that you can do to handle the jobs if they get interrupted due to a spot interruption. The best that can be done is to create a script which can restart any job that was interrupted.</p><p>However, if we follow these practices while setting up the cluster, we can minimize spot interruptions to a great extent.</p><ul><li>Regularly scale down the nodes where the runner pods are scheduled when there are no jobs running.</li><li>Configure the EKS node group with as many different instance types as we can, preferably from different markets (instance classes, availability zones)</li><li>Set the node group to use the “<a href="https://aws.amazon.com/about-aws/whats-new/2019/08/new-capacity-optimized-allocation-strategy-for-provisioning-amazon-ec2-spot-instances/">capacity-optimized allocation strategy</a>” to ensure that the most-likely-to-survive instance type is picked every time there’s a scale-up. 
This is enabled by default if we use a managed node group to create the cluster.</li></ul><p>Let’s get started with the implementation.</p><h3>STEP 1: Install utilities</h3><p>Visit <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html">this</a> link for detailed instructions on installing the utilities.</p><h4>A — Install and configure <strong>AWS CLI</strong></h4><h4>B — Install <strong>kubectl</strong> Utility</h4><h4>C — Install <strong>eksctl</strong> Utility</h4><h3>STEP 2: Set Up AWS EKS Cluster</h3><h4>A — Create <em>AWS KMS key</em></h4><p>1 — Execute the following to create it</p><p>aws kms create-key --description &quot;ActionRunnerKey&quot; --region us-east-1</p><p>2 — To view the key,</p><p>aws kms list-keys</p><p>Copy the key ARN of the key that you just created.</p><h4>B — Create a cluster of spot instances</h4><p>1 — Copy the following configuration into a file called cluster_config.yaml. Use the key ARN that you copied in the previous step. You can also find it using the AWS Console.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e1ddbe73e40e8ce2f62e6132b2408671/href">https://medium.com/media/e1ddbe73e40e8ce2f62e6132b2408671/href</a></iframe><p>2 — Execute the following command to create the cluster</p><p>eksctl create cluster -f <strong><em>cluster_config.yaml</em></strong></p><h3>STEP 3: Set Up Action-Runner-Controller</h3><p>Before you set up ARC, if you want to autoscale nodes, you can use the Kubernetes <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler">Cluster Autoscaler</a> or <a href="https://karpenter.sh/">Karpenter</a>. 
To know more, you can see <a href="https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html#cluster-autoscaler">Autoscaling in AWS EKS</a>.</p><h4>A — Install <strong>cert-manager.</strong></h4><p>Follow the steps mentioned in this <a href="https://cert-manager.io/docs/installation/kubectl/">link</a> for installing it using kubectl.</p><h4>B — Install the custom resource definitions and <strong>actions-runner-controller.</strong></h4><p>It can be done using kubectl or helm. This will create the <strong>actions-runner-system</strong> namespace in your Kubernetes cluster and deploy the required resources.</p><p>1 — Download the YAML file using the following command</p><p>curl -L -o actions-runner-controller.yaml <a href="https://github.com/actions-runner-controller/actions-runner-controller/releases/download/v0.21.0/actions-runner-controller.yaml">https://github.com/actions-runner-controller/actions-runner-controller/releases/download/v0.21.0/actions-runner-controller.yaml</a></p><p>2 — Now, deploy this. With <strong>kubectl</strong>, it can be done with the following command</p><p>kubectl create -f <a href="https://github.com/actions-runner-controller/actions-runner-controller/releases/download/v0.21.0/actions-runner-controller.yaml">actions-runner-controller.yaml</a></p><h4><em>C — Set Up Authentication with GitHub API</em>.</h4><p>You can use PAT-based authentication or GitHub App-based authentication to authenticate the runners with GitHub. 
For the purpose of this tutorial, we will use PAT-based authentication.</p><p>1 — Log in to a GitHub account that has admin privileges for the repository, and <a href="https://github.com/settings/tokens/new">create a personal access token</a> with the appropriate scopes listed below:</p><p>Required Scopes for Repository Runners</p><ul><li>repo (Full control)</li></ul><p>Required Scopes for Organization Runners</p><ul><li>repo (Full control)</li><li>admin:org (Full control)</li><li>admin:public_key (read:public_key)</li><li>admin:repo_hook (read:repo_hook)</li><li>admin:org_hook (Full control)</li><li>notifications (Full control)</li><li>workflow (Full control)</li></ul><p>Required Scopes for Enterprise Runners</p><ul><li>admin:enterprise (manage_runners:enterprise)</li></ul><p>For the purpose of this tutorial, we will deploy the self-hosted runner at the repository level.</p><p>2 — Once you have created the appropriate token, deploy it as a secret to the Kubernetes cluster that you are going to deploy the solution on:</p><pre>kubectl create secret generic controller-manager \<br>    -n actions-runner-system \<br>    --from-literal=github_token=${GITHUB_TOKEN}</pre><h4><em>D — Deploy Runners on EKS</em></h4><p>1 — To create the runner in a custom namespace, first create the namespace using the command: kubectl create namespace action-runner-runners</p><p>2 — To launch an autoscaled self-hosted runner, you need to create a manifest file that includes the RunnerDeployment resource. In this file, we have mentioned ubuntu:latest as the image that will be used to create the runners. 
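</p><p>As an illustration, a minimal RunnerDeployment manifest for this setup might look like the following; the repository name and metadata name are placeholders, and the apiVersion should match the ARC release you installed:</p><pre>
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner-deployment
  namespace: action-runner-runners
spec:
  template:
    spec:
      # Placeholder: the repository these runners should attach to
      repository: your-org/your-repo
      image: ubuntu:latest
      labels:
        - large
</pre><p>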
However, you can also use a custom image.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3b35cb6fb81f866bb0379c469ed870c6/href">https://medium.com/media/3b35cb6fb81f866bb0379c469ed870c6/href</a></iframe><p>3 — Apply the created manifest file to your Kubernetes cluster in the specified namespace: kubectl --namespace <strong><em>action-runner-runners</em></strong> apply -f runner-deployment.yaml</p><p>4 — Then create another manifest file that includes the HorizontalRunnerAutoscaler resource as follows.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3c926a3cf5fd6aa888de18f6c6e6bb25/href">https://medium.com/media/3c926a3cf5fd6aa888de18f6c6e6bb25/href</a></iframe><p>We are using the PercentageRunnersBusy metric to autoscale the pods. However, ARC offers additional metrics for autoscaling the pods. Visit <a href="https://github.com/actions-runner-controller/actions-runner-controller#autoscaling">this</a> link to know more.</p><p>5 — Apply the created manifest file to your Kubernetes cluster.</p><p>kubectl --namespace <strong><em>action-runner-runners </em></strong>apply -f horizontal-runner-autoscaler.yaml</p><p>6 — The runner you created must now be registered to your repository. To check, open your repository on GitHub. Go to <strong>Settings</strong> -&gt; <strong>Actions</strong> -&gt; <strong>Runners</strong>. It must list a runner with a `self-hosted` tag.</p><p>7 — Configure GitHub Actions workflows to use the self-hosted runner</p><p>To specify a self-hosted runner for your GitHub Actions job, configure the runs-on field in your workflow file to contain the labels that you mentioned in the RunnerDeployment resource file. 
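</p><p>For example, a workflow job that targets these runners might be declared like this; the workflow name, trigger, and build step are placeholders:</p><pre>
name: build
on: push
jobs:
  test:
    # Labels must match those declared on the RunnerDeployment
    runs-on: [self-hosted, large]
    steps:
      - uses: actions/checkout@v3
      - run: make test
</pre><p>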
Note that GitHub attaches the label self-hosted to self-hosted runners by default.</p><p>runs-on: [self-hosted, large]</p><p>Now, GitHub Actions should run on your self-hosted runner in AWS EKS.</p><h3>Tear Down</h3><p>You now know how to set up self-hosted runners using AWS EKS.</p><p>To free up all the AWS resources, you can use the following command:</p><p>eksctl delete cluster --name <strong><em>my-cluster</em></strong> --region <strong><em>us-east-1</em></strong></p><p>To de-register the runner from your GitHub repository, open your repository on GitHub. Go to <strong>Settings</strong> -&gt; <strong>Actions</strong> -&gt; <strong>Runners. </strong>You can now delete the runner to de-register it.</p><p>For comments or feedback, you can get in touch with me over <a href="https://www.linkedin.com/in/shivam-agarwal-2015/">LinkedIn</a>.</p><p><em>This was an internship project, mentored by </em><a href="https://www.linkedin.com/in/priya-sebastian-3137352a/"><em>Priya Sebastian</em></a><em>.</em></p><p>If you are interested in our work and want to solve complex problems in SaaS products, platform &amp; cloud infrastructure engineering — <a href="https://www.chargebee.com/careers/engineering-culture/">we are hiring!</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=342f02ee2320" width="1" height="1" alt=""><hr><p><a href="https://medium.com/chargebee-engineering/save-cost-by-running-github-actions-on-spot-instances-inside-an-eks-cluster-342f02ee2320">Save cost by running GitHub Actions on Spot Instances inside an autoscaled EKS Cluster</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Solving Engineering Problems using Security Tools — Technical Debt Elimination using CodeQL]]></title>
            <link>https://medium.com/chargebee-engineering/solving-engineering-problems-using-security-tools-technical-debt-elimination-using-codeql-83a1e4649e4b?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/83a1e4649e4b</guid>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[technical-debt]]></category>
            <category><![CDATA[control-flow]]></category>
            <category><![CDATA[codeql]]></category>
            <dc:creator><![CDATA[Abhisek Datta]]></dc:creator>
            <pubDate>Fri, 10 Jun 2022 05:13:23 GMT</pubDate>
            <atom:updated>2022-06-10T05:13:23.869Z</atom:updated>
            <content:encoded><![CDATA[<h3>Eliminating Technical Debt using Control Flow Graph Analysis</h3><p>At Chargebee, we have been using <a href="https://codeql.github.com/">CodeQL</a> for a while to solve security problems related to finding all variants of a given vulnerability. The same approach can, however, be used to solve an important engineering problem — technical debt reduction and dead code elimination.</p><h3>Technical Debt Accumulation</h3><p>Any moderately complex piece of software is an aggregation of code that we write and the libraries that we adopt from open source into our build. It’s a continuous choice between building from scratch or adopting open source. In any case, the evolution of our code base increases the technical debt —</p><ol><li>Deprecated or unused lines of code</li><li>Libraries included in the build but no longer used</li><li>Dependencies on legacy or unmaintained libraries</li></ol><p>Add to that the complexity of a legacy build system that uses unmanaged jars sprayed across your Git repositories.</p><h4>Challenges in Dead Code Elimination</h4><p>Build tools like <a href="https://docs.gradle.org/current/userguide/viewing_debugging_dependencies.html">Gradle</a> or Maven provide out-of-the-box support for identifying dependencies. However, older Ant-based build systems cannot use such features readily. Even so, modern build tools are not capable of detecting unused code blocks or dependencies, especially in the case of transitive dependencies.</p><h3>Using CodeQL to Identify Unused External Libraries</h3><blockquote>CodeQL is the code analysis platform used by security researchers to automate variant analysis.</blockquote><p>We are looking at adopting CodeQL for identifying different variants of a vulnerability, found internally or reported by external security vendors. 
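</p><p>To give a flavour of what such a query looks like (a generic sketch, not one of our internal queries), here is a CodeQL query over a Java database that finds every call to Runtime.exec, a typical starting point for hunting command injection variants. Depending on the CodeQL standard library version, the call class may be named MethodAccess instead of MethodCall:</p><pre>
import java

// Flag every call to java.lang.Runtime.exec in the analysed code base
from MethodCall call, Method m
where
  call.getMethod() = m and
  m.hasName("exec") and
  m.getDeclaringType().hasQualifiedName("java.lang", "Runtime")
select call, "Call to Runtime.exec; review for command injection variants."
</pre><p>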
As part of the evaluation, we wrote context-specific CodeQL classes modelling our controller classes (Java) that can be used to write queries for common vulnerabilities.</p><p>We internally ran an experiment to leverage CodeQL to identify unused libraries in a sample application. The general idea is given below</p><ol><li>Build a CodeQL database for a sample application — This represents a Control Flow Graph (CFG) for us to query upon</li><li>Write a CodeQL query to identify all cross-package <a href="https://help.semmle.com/QL/learn-ql/java/introduce-libraries-java.html">MethodCall</a>s, i.e. the <em>caller</em> is defined in com.example.sampleApp and the <em>callee</em> is NOT in the same package. To reduce false positives, we filter out the java.* packages as well.</li><li>Create a sorted list of all <a href="https://maven.apache.org/guides/mini/guide-naming-conventions.html">GAV</a>s based on the existing <a href="https://docs.oracle.com/javase/tutorial/deployment/jar/manifestindex.html">jar manifests</a></li><li>Any library (jar) for which we do not have at least one cross-package <em>Method Call</em> is potentially unused and can be removed.</li></ol><p>An example control flow graph (CFG) is given below; it visualises the idea that we need to capture <em>method calls</em> across packages.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*muYfk7ghOPhXE_IJ" /><figcaption>An example control flow graph demonstrating a cross-package method call</figcaption></figure><p>An example CodeQL query for [2] would look like this</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/2077a9964c8def1ac478797d0eb3f5e1/href">https://medium.com/media/2077a9964c8def1ac478797d0eb3f5e1/href</a></iframe><p>This approach can also be used to identify unused blocks of code, including a class, method, or package, with minimal customisation of the above query.</p><h4>Challenges and Constraints</h4><p>The approach presented in this post works in 
general cases but fails to handle dynamically resolved or transitive dependencies. For example, consider that <em>external-lib-1</em> is dependent on <em>external-lib-2</em>. Our approach above will not consider this case. We did not attempt to solve this problem, as we believe an application should only manage its immediate dependencies and let the build tool take care of transitive dependencies. Controlling external dependencies, including transitive dependencies, for security or quality gate requirements can be implemented using a private repository manager and is not really within the scope of this problem.</p><p>For comments or feedback, you can get in touch with me over <a href="https://twitter.com/abh1sek">Twitter</a> 😀</p><p>If you are interested in our work and want to solve complex problems in SaaS products, platform &amp; cloud infrastructure engineering — <a href="https://www.chargebee.com/careers/engineering-culture/">we are hiring!</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=83a1e4649e4b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/chargebee-engineering/solving-engineering-problems-using-security-tools-technical-debt-elimination-using-codeql-83a1e4649e4b">Solving Engineering Problems using Security Tools — Technical Debt Elimination using CodeQL</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Making API mocking easy with Mirage!]]></title>
            <link>https://medium.com/chargebee-engineering/making-api-mocking-easy-with-mirage-929cf32ca7ac?source=rss----3e2581cef21d---4</link>
            <guid isPermaLink="false">https://medium.com/p/929cf32ca7ac</guid>
            <category><![CDATA[frontend]]></category>
            <category><![CDATA[developer-experience]]></category>
            <category><![CDATA[unit-testing]]></category>
            <category><![CDATA[development]]></category>
            <category><![CDATA[testing]]></category>
            <dc:creator><![CDATA[AswathPrabhu R]]></dc:creator>
            <pubDate>Tue, 31 May 2022 04:24:48 GMT</pubDate>
            <atom:updated>2022-05-31T04:51:00.568Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="Build your entire UI against a mock backend API" src="https://cdn-images-1.medium.com/max/1024/0*INWFfUvjlpsVsFJl.png" /></figure><p>We use <a href="https://miragejs.com/"><em>Mirage</em></a> at <em>Chargebee</em> and it is the <strong>one deed that we did and in return it has filled two of our needs</strong> 😇. Our experience with Mirage is quite exciting and we can’t wait to share it here! Let’s go already!</p><p><strong><em>What is Mirage? </em>🤔</strong></p><p>Mirage helps frontend developers configure a <strong>mock server</strong> for the backend APIs, and the server is centralised for both development and the test suite.</p><p>It’s not just a plain old API mocking tool, but has all the batteries included that help to build complete frontend features even if our APIs don’t exist yet. Let’s say you wanna build a product prototype rapidly with minimal overhead; that’s completely possible with Mirage.</p><p><strong><em>Life before Mirage at Chargebee </em>😅</strong></p><p>Before implementing Mirage, we had quite an inflexible approach where, for testing, <strong>we were copy-pasting chunks of static fixtures</strong> as return values for <a href="https://jestjs.io/docs/mock-functions"><em>jest mock functions</em></a> here and there and testing over them.</p><p>This approach covered only a set of cases for us (mostly the happy path of the users), and handling dynamic scenarios was a headache where we were forced to do a lot of work.</p><p>For developing features without the backend dependency, we were using axios interceptors for mocking the APIs. We were just routing the network calls made in the application to a local node server that handled the endpoints.</p><p>This obviously was not living alongside the rest of the application and forced us to switch contexts. 
Here again, we were mostly tracing the happy path and were not dealing with the <strong>network states and latency</strong>.</p><p><strong><em>What makes Mirage a better approach? </em>😎</strong></p><p>Mirage runs alongside the rest of the application; no new server processes or terminal windows are needed. It has an <strong>in-memory database</strong> that ensures referential integrity between different data models. Additionally, there are concepts like,</p><blockquote><strong>Factory</strong></blockquote><blockquote>The factory layer helps to organize the <strong>data-creation logic</strong>. The mock server can be quickly put into a different state during development as well as during testing. During development, the in-memory database can be seeded via factories in the seeds hook to load some initial data.</blockquote><blockquote><strong>Serializer</strong></blockquote><blockquote>The serializing layer hooks into the stage where route handlers process data models to return a response. <strong>It is responsible for the format of the data returned</strong>. There are some <a href="https://miragejs.com/docs/main-concepts/serializers/#choosing-which-serializer-to-use">built-in serializers</a> included and also an option to customize the default behavior.</blockquote><p><strong><em>Woo-hoo! Mock server can be centralized </em>😇</strong></p><p>This is one of the most useful features: the mock server can be shared between the actual development workflow and the test suite. With factories, various dynamic scenarios of a component can be tested.</p><p><strong><em>Enough of theory? Let’s get practical! </em>🥳</strong></p><p>This is the sample application we’ll be using to get a gist of Mirage’s capabilities. It displays random quotes from <a href="https://gameofthrones.fandom.com/wiki/Game_of_Thrones"><strong><em>Game of Thrones</em></strong></a> on every load. 
It also includes a route with a dynamic segment that denotes the number of quotes requested (for example, `/10` queries 10 quotes via the API and renders the results).</p><p><strong>The application is hosted </strong><a href="https://gotquotes.vercel.app/#/3"><strong>here</strong></a><strong>.</strong></p><p>In the playground below, you can find main.js, which holds the application’s routes config. We also spin up the Mirage server for the development environment, so Mirage can intercept all the network requests triggered from the application and respond with mock responses.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fstackblitz.com%2Fedit%2Fgot-quotes-sample%3Fembed%3D1%26file%3Dsrc%252Fmain.js%26view%3Deditor&amp;display_name=StackBlitz&amp;url=https%3A%2F%2Fstackblitz.com%2Fedit%2Fgot-quotes-sample%3Ffile%3Dsrc%252Fmain.js%26view%3Deditor&amp;image=https%3A%2F%2Fc.staticblitz.com%2Fassets%2Fsocial_editor-1e53e71ce7e2963fcef1b44837c4570c3fb10c8cb64b194a6eafa4e145d10edc.png&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=stackblitz" width="745" height="400" frameborder="0" scrolling="no"><a href="https://medium.com/media/486b1f6f4207d088de2627285134934d/href">https://medium.com/media/486b1f6f4207d088de2627285134934d/href</a></iframe><p>Mirage’s server config can be found here:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fstackblitz.com%2Fedit%2Fgot-quotes-sample%3Fembed%3D1%26file%3Dsrc%252Fmirage%252Fserver.js%26view%3Deditor&amp;display_name=StackBlitz&amp;url=https%3A%2F%2Fstackblitz.com%2Fedit%2Fgot-quotes-sample%3Ffile%3Dsrc%252Fmirage%252Fserver.js%26view%3Deditor&amp;image=https%3A%2F%2Fc.staticblitz.com%2Fassets%2Fsocial_editor-1e53e71ce7e2963fcef1b44837c4570c3fb10c8cb64b194a6eafa4e145d10edc.png&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=stackblitz" width="745" height="400" frameborder="0" scrolling="no"><a 
href="https://medium.com/media/301c39d41ea8246ec67563e64f3921c3/href">https://medium.com/media/301c39d41ea8246ec67563e64f3921c3/href</a></iframe><p>We’ve set up the quotes factory. It comes in quite handy for testing various dynamic scenarios, as we’ll see later. You can find the intercepted routes under the routes method.</p><p>If we spin up the Mirage server in environments like development, we usually pass through (instead of mocking) all the APIs to the actual server. Mirage exposes a <a href="https://miragejs.com/api/classes/server/#passthrough"><strong><em>passthrough</em></strong></a> method on the server instance for exactly this.</p><p>We can also avoid passing through during active development, for example while an API is not yet ready. We just construct our factories according to the API’s spec and let the Mirage server intercept all the requests by enabling the routes in the development environment too.</p><p>Once everything is in place and the API is ready, we can disable those routes in the development environment and pass them through. The requests then travel through the <strong>network layers and hit the actual servers</strong>. Things should just work!</p><p>No rework should be required if we’ve set up our Mirage models to mirror the actual backend exactly. Still, some may have questions like:</p><blockquote>Is this all worth it?</blockquote><blockquote>Are we over-engineering by building the UI without the backend?</blockquote><blockquote>Could we instead keep mock data as plain data properties in UI components?</blockquote><p>The real power comes when the Mirage mock server is also shared by the test environment.
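</p><p>A common way to share the server (sketched here with an assumed file layout and endpoint) is to export a <code>makeServer</code> function that both the app entry point and the test suite can call:</p>

```javascript
// src/mirage/server.js (illustrative path)
import { createServer } from "miragejs";

export function makeServer({ environment = "development" } = {}) {
  return createServer({
    environment,

    routes() {
      this.namespace = "api";

      // Endpoints still under construction stay mocked:
      this.get("/quotes", () => ({ quotes: [] }));

      // Every unhandled request travels over the network to the real backend:
      this.passthrough();
    },
  });
}
```

<p>The app calls <code>makeServer()</code> during development, while the test suite calls <code>makeServer({ environment: "test" })</code>, so both run against the same route definitions.</p><p>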
Just have a look at the component test file in the playground below:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fstackblitz.com%2Fedit%2Fgot-quotes-sample%3Fembed%3D1%26file%3Dsrc%252Fcomponents%252F__tests__%252FCardsContainer.spec.js%26view%3Deditor&amp;display_name=StackBlitz&amp;url=https%3A%2F%2Fstackblitz.com%2Fedit%2Fgot-quotes-sample%3Ffile%3Dsrc%252Fcomponents%252F__tests__%252FCardsContainer.spec.js%26view%3Deditor&amp;image=https%3A%2F%2Fc.staticblitz.com%2Fassets%2Fsocial_editor-1e53e71ce7e2963fcef1b44837c4570c3fb10c8cb64b194a6eafa4e145d10edc.png&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=stackblitz" width="745" height="400" frameborder="0" scrolling="no"><a href="https://medium.com/media/5001bf8c636359373d11b98ae3296c44/href">https://medium.com/media/5001bf8c636359373d11b98ae3296c44/href</a></iframe><p>The above file contains unit tests for the component that renders the card view. See how quickly we can put the API response into different states and scenarios <strong>without copy-pasting random mocks here and there</strong>! This is where the factories come into play.</p><p>To accomplish this, we’ve done nothing complex: we just set up our quote factory to create the required mocks dynamically and added a route to intercept in an Express-style syntax. That’s it!
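</p><p>In Jest terms, that pattern boils down to something like the following sketch. The shared <code>makeServer</code> module, the test names, and the component are assumed, and a <code>quote</code> factory is expected to be registered in the server config:</p>

```javascript
import { makeServer } from "../mirage/server"; // assumed shared server module

let server;

beforeEach(() => {
  // Boot the same mock server the app uses, in test mode.
  server = makeServer({ environment: "test" });
});

afterEach(() => server.shutdown());

test("renders one card per quote", async () => {
  // The quote factory generates ten distinct records -- no static fixtures.
  server.createList("quote", 10);

  // render the cards component here and assert that ten cards appear...
});

test("shows an empty state when there are no quotes", async () => {
  // With no records created, the route handler returns an empty collection.
  // render the component and assert the empty-state message...
});
```

<p>Each test shapes the backend state it needs with one factory call, instead of importing yet another fixture file.</p><p>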
Mirage also has batteries included for handling complex data relationships.</p><p>This kind of flexibility just can’t be achieved with <strong>data properties</strong> on UI components that hold mock data.</p><p>Some cons of using plain data props for mocking:</p><ul><li>Integrity with the backend can’t be ensured</li><li>Data fetching, the various network states, and persistence can be very complex to handle</li><li>There is no straightforward way to dynamically generate responses</li><li>It tends to cover only the user’s happy path</li></ul><p>Mirage, on the other hand, ticks all these boxes for us, and we can efficiently handle all our cases.</p><p><strong><em>Do we have alternatives? </em></strong>🤤</p><p>Yes. <a href="https://mswjs.io/">MSW</a> is a similar tool. It uses a service worker under the hood to intercept network requests, whereas Mirage stubs out XMLHttpRequest and the fetch API.</p><p>This is an advantage MSW has over Mirage, as the requests show up in the Network tab of the DevTools; they’re real HTTP requests from the browser’s perspective. That makes for great DX during actual development.</p><p>MSW’s mock server can also be used in the test environment. Its route handler hands us a request and lets us write a response (interceptor functionality), but nothing more than that.</p><p>Mirage, on the other hand, provides a similar Express-style route handler API and also brings an <strong>in-memory DB, support for relationships, and factories</strong> that make creating different data scenarios easy.</p><p>We stuck with Mirage because it came with batteries included for mocking in a way that faithfully reproduces the behavior of the production API, which in turn helped us test various dynamic scenarios and not just the happy path!</p><p>So that’s it. This is how beneficial Mirage has been for us, and we can’t wait to hear about your experience with it in the comments. Shoot us anything you’ve got; let’s keep the discussion going! 
👋</p><hr><p><a href="https://medium.com/chargebee-engineering/making-api-mocking-easy-with-mirage-929cf32ca7ac">Making API mocking easy with Mirage!</a> was originally published in <a href="https://medium.com/chargebee-engineering">Engineering @ Chargebee</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>