<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Quadcode - Medium]]></title>
        <description><![CDATA[Quadcode is an international IT company. We develop a trading SaaS platform, as well as banking and internal solutions. Over 50 million traders in 150+ countries appreciate the benefits of our platform. - Medium]]></description>
        <link>https://medium.com/quadcode-life?source=rss----526934940ae0---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Quadcode - Medium</title>
            <link>https://medium.com/quadcode-life?source=rss----526934940ae0---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 14 May 2026 19:43:04 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/quadcode-life" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Fully typed forms based on API schema with react-hook-form, openapi-typescript and yup validation]]></title>
            <link>https://medium.com/quadcode-life/fully-typed-forms-based-on-api-schema-with-react-hook-form-openapi-typescript-and-yup-validation-93ba1321368b?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/93ba1321368b</guid>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[openapi-typescript]]></category>
            <category><![CDATA[react-hook-form]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[yup]]></category>
            <dc:creator><![CDATA[Dmitrii Pashkevich]]></dc:creator>
            <pubDate>Wed, 12 Feb 2025 12:25:56 GMT</pubDate>
            <atom:updated>2025-02-12T12:25:55.870Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="Fully typed forms based on API schema with react-hook-form, openapi-typescript and yup validation" src="https://cdn-images-1.medium.com/max/720/1*npbP6EhpqqlTEMrpMdOxjg.png" /></figure><h3>Fully typed forms based on API schema with react-hook-form, openapi-typescript and yup validation</h3><p>Hello everyone, my name is Dmitrii Pashkevich and I’m a Frontend Developer at Quadcode. This article explains an approach to creating fully typed forms in React applications using <a href="https://www.react-hook-form.com/">react-hook-form</a>, <a href="https://openapi-ts.dev/cli">openapi-typescript</a> and <a href="https://github.com/jquense/yup">yup</a> validation. This method relies on an API schema to ensure that the form data exactly matches the types from the backend API schema, simplifying development and reducing errors in the process. Whether you are a Frontend or a Full-stack Engineer looking to improve form reliability, I hope you find this material useful. Enjoy reading it!</p><h3>About</h3><p>Let’s consider an example of creating a form for a POST request around a basic entity “Offer” consisting of 4 fields:</p><ul><li><em>name</em> — full name of the “Offer”. Field: string; mandatory; maximum length 1024 characters;</li><li><em>short_name</em> — short name of the “Offer”. Field: string; mandatory; maximum length 1024 characters;</li><li><em>advertiser_id</em> — id of the advertiser to which the “Offer” belongs. Field: number; mandatory;</li><li><em>model</em> — alias of the model on which the “Offer” works. Field: string; mandatory; maximum length 1024 characters.</li></ul><h3>Project preparation</h3><p>Let’s create a basic React project using the generator from Vite with the directory name <em>fully-typed-form</em>.</p><pre>npm create vite@latest</pre><p>The command will result in a directory containing a simple React application. 
To keep things simple, let’s assume that the server API will be described as a yaml file containing a description of the Open API schema, which allows you to send a request to create an “Offer” entity.</p><p>Let’s start a file describing the Open API schema — <em>fully-typed-form/server/offers.openapi.yaml</em>.</p><pre>openapi: 3.1.0<br>info:<br>  title: Example API for article<br>  description: Article &quot;Fully typed forms based on API schema with react-hook-form,  openapi-typescript and yup validation&quot;<br>  version: 1.0.0<br>servers:<br>  - url: &#39;https&#39;<br>paths:<br>  /api/offers:<br>    post:<br>      summary: Create offer<br>      requestBody:<br>        required: true<br>        content:<br>          application/json:<br>            schema:<br>              $ref: &#39;#/components/schemas/OfferPostBody&#39;<br>      responses:<br>        &#39;201&#39;:<br>          description: Created<br>          content:<br>            application/json:<br>              schema:<br>                $ref: &#39;#/components/schemas/OfferId&#39;<br>components:<br>  schemas:<br>    OfferPostBody:<br>      required:<br>        - name<br>        - short_name<br>        - advertiser_id<br>        - model<br>      properties:<br>        name:<br>          description: Full offer name<br>          type: string<br>          maxLength: 1024<br>          x-oapi-codegen-extra-tags:<br>            validate: &quot;required,notBlank,lte=1024&quot;<br>        short_name:<br>          description: Short offer name<br>          type: string<br>          maxLength: 1024<br>          x-oapi-codegen-extra-tags:<br>            validate: &quot;required,notBlank,lte=1024&quot;<br>        advertiser_id:<br>          description: Id of the advertiser<br>          type: integer<br>          x-oapi-codegen-extra-tags:<br>            validate: &quot;required&quot;<br>        model:<br>          description: Model alias<br>          type: string<br>          x-oapi-codegen-extra-tags:<br>  
          validate: &quot;required,notBlank,lte=1024&quot;<br><br>    OfferId:<br>      type: object<br>      properties:<br>        id:<br>          type: string<br>          description: Id of the offer</pre><p>In order to generate an API-based schema, let’s use the openapi-typescript library, which is a tool designed to generate TypeScript types based on OpenAPI specifications. It helps to automatically generate data types to work with the API, which greatly simplifies the integration process and increases code reliability.</p><p>Let’s execute the type generation command for the above API schema.</p><pre>npx openapi-typescript server/offers.openapi.yaml -o src/shared/types/api/offers.types.generated.ts --make-paths-enum --root-types-no-schema-prefix --root-types --path-params-as-types --alphabetize</pre><p>The output is a generated file <em>src/shared/types/api/offers.types.generated.ts</em> containing types corresponding to the scheme described above.</p><pre>/**<br>* This file was auto-generated by openapi-typescript.<br>* Do not make direct changes to the file.<br>*/<br><br>export interface paths {<br>   &quot;/api/offers&quot;: {<br>       parameters: {<br>           query?: never;<br>           header?: never;<br>           path?: never;<br>           cookie?: never;<br>       };<br>       get?: never;<br>       put?: never;<br>       /** Create offer */<br>       post: {<br>           parameters: {<br>               query?: never;<br>               header?: never;<br>               path?: never;<br>               cookie?: never;<br>           };<br>           requestBody: {<br>               content: {<br>                   &quot;application/json&quot;: components[&quot;schemas&quot;][&quot;OfferPostBody&quot;];<br>               };<br>           };<br>           responses: {<br>               /** @description Created */<br>               201: {<br>                   headers: {<br>                       [name: string]: unknown;<br>                   };<br> 
                  content: {<br>                       &quot;application/json&quot;: components[&quot;schemas&quot;][&quot;OfferId&quot;];<br>                   };<br>               };<br>           };<br>       };<br>       delete?: never;<br>       options?: never;<br>       head?: never;<br>       patch?: never;<br>       trace?: never;<br>   };<br>}<br>export type webhooks = Record&lt;string, never&gt;;<br>export interface components {<br>   schemas: {<br>       OfferId: {<br>           /** @description Id of the offer */<br>           id?: string;<br>       };<br>       OfferPostBody: {<br>           /** @description Id of the advertiser */<br>           advertiser_id: number;<br>           /** @description Model alias */<br>           model: string;<br>           /** @description Full offer name */<br>           name: string;<br>           /** @description Short offer name */<br>           short_name: string;<br>       };<br>   };<br>   responses: never;<br>   parameters: never;<br>   requestBodies: never;<br>   headers: never;<br>   pathItems: never;<br>}<br>export type OfferId = components[&#39;schemas&#39;][&#39;OfferId&#39;];<br>export type OfferPostBody = components[&#39;schemas&#39;][&#39;OfferPostBody&#39;];<br>export type $defs = Record&lt;string, never&gt;;<br>export type operations = Record&lt;string, never&gt;;<br>export enum ApiPaths {<br>   PostApiOffers = &quot;/api/offers&quot;<br>}</pre><p>The example project will be described using <a href="https://feature-sliced.design/">FSD</a> (Feature-Sliced Design), so if you don’t want to read about prep work, auxiliary utilities, types and components, you can go straight to the Widgets layer (OfferFormCreate) and skip the description of sections related to shared and features layer.</p><p>Next, to implement the form we need to install the following modules.</p><pre>npm install react-hook-form @hookform/resolvers yup</pre><p>We’ll also install and connect Mantine UI components so to avoid writing 
unnecessary styles.</p><pre>npm install @mantine/core @mantine/hooks<br>npm install --save-dev postcss postcss-preset-mantine postcss-simple-vars</pre><p>Also install and connect <a href="https://www.npmjs.com/package/@tanstack/react-query">Tanstack React Query</a> to work with API requests.</p><pre>npm install @tanstack/react-query</pre><p>The configuration and setup of Mantine and Tanstack React Query can be found in the file: <em>src/App.tsx</em>.</p><h3>Shared layer</h3><p>This layer is located in <em>src/shared</em> and contains helper utilities, types, API requests, etc.</p><h4>API requests</h4><p>The <em>src/shared/api/offers.api.ts</em> file contains an emulation of the request to create an offer.</p><pre>import { OfferPostBody } from &#39;../types/api/offers.types.generated.ts&#39;<br><br>export const apiOfferPost = async (values: OfferPostBody) =&gt; {<br> await new Promise((resolve) =&gt; setTimeout(resolve, 1000))<br><br> console.info(values)<br>}</pre><h4>CreateEnumObject utility</h4><p>To simplify the typing and description of form fields, let’s create an auxiliary utility that turns a Union type into an object: <em>src/shared/types/utils/createEnumObject.ts</em>.</p><pre>export const createEnumObject = &lt;T extends string&gt;(o: { [P in T]: P }) =&gt; {   return o }</pre><p>This utility helps keep the naming of form fields uniform across the project files.</p><h4>FormSelect component</h4><p>This is an auxiliary wrapper component over <a href="https://mantine.dev/core/select/">Mantine Select</a> that allows the component to be connected to react-hook-form by using <a href="https://react-hook-form.com/docs/usecontroller">useController</a>. 
This is necessary because <a href="https://react-hook-form.com/docs/useform/register">form.register</a> cannot be used to control the field: the two libraries expect different onChange signatures.</p><pre>// react-hook-form onChange<br>export type ChangeHandler = (event: {<br>   target: any;<br>   type?: any;<br>}) =&gt; Promise&lt;void | boolean&gt;;<br><br>// VS<br><br>// Mantine Select onChange<br>onChange?: (value: string | null, option: ComboboxItem) =&gt; void;</pre><p>The full code of the component can be found in the file: <em>src/shared/components/FormSelect</em>.</p><pre>import { type FieldValues, useController } from &#39;react-hook-form&#39;<br><br>import { Select as MantineSelect } from &#39;@mantine/core&#39;<br><br>import { FormSelectPropertiesType } from &#39;./FormSelect.types.ts&#39;<br><br>export const FormSelect = &lt;T extends FieldValues&gt;({<br> name,<br> control,<br> defaultValue,<br> onChange,<br> rules,<br> shouldUnregister,<br> ...properties<br>}: FormSelectPropertiesType&lt;T&gt;) =&gt; {<br> const {<br>   field: { onChange: fieldOnChange, value, ...field },<br>   fieldState,<br> } = useController&lt;T&gt;({<br>   name,<br>   control,<br>   defaultValue,<br>   rules,<br>   shouldUnregister,<br> })<br><br> return (<br>   &lt;MantineSelect<br>     onChange={(value, option) =&gt; {<br>       fieldOnChange(value)<br>       onChange?.(value, option)<br>     }}<br>     error={fieldState.error?.message}<br>     value={value}<br>     {...field}<br>     {...properties}<br>   /&gt;<br> )<br>}</pre><h3>Features layer</h3><p>Because the form will contain interactive elements for working with advertisers and models, let’s place them in the features layer.</p><h4>AdvertiserSelect component</h4><p>The full code can be found in the file: <em>src/features/AdvertiserSelect</em>.</p><pre>import type { AdvertiserSelectPropertiesInterface } from &#39;./AdvertiserSelect.types.ts&#39;<br><br>import { FormSelect } from 
&#39;../../shared/components/FormSelect/FormSelect.tsx&#39;<br>import { ADVERTISERS_LIST } from &#39;./base/constant.ts&#39;<br><br>export const AdvertiserSelect = ({ name, form, ...rest }: AdvertiserSelectPropertiesInterface) =&gt; (<br> &lt;FormSelect name={name} control={form.control} data={ADVERTISERS_LIST} required searchable {...rest} /&gt;<br>)</pre><h4>ModelSelect component</h4><p>The full code can be found in the file: <em>src/features/ModelSelect</em>.</p><pre>import type { ModelSelectPropertiesInterface } from &#39;./ModelSelect.types.ts&#39;<br><br>import { FormSelect } from &#39;../../shared/components/FormSelect/FormSelect.tsx&#39;<br>import { MODELS_LIST } from &#39;./base/constant.ts&#39;<br><br>export const ModelSelect = ({ name, form, ...rest }: ModelSelectPropertiesInterface) =&gt; (<br> &lt;FormSelect name={name} control={form.control} data={MODELS_LIST} required searchable {...rest} /&gt;<br>)</pre><h3>Widgets layer (OfferFormCreate)</h3><p>Next, let’s move on to the main part of the article — organizing a typed form using react-hook-form and validation via yup.</p><p>On this layer, let’s create a directory that will contain the form widget for creating an offer: <em>src/widgets/offers/OfferFormCreate</em>.</p><p>The form widget will consist of:</p><ul><li>a description of field constants based on the type obtained from the API;</li><li>a description of label and placeholder objects for the form fields;</li><li>a description of the validation schema;</li><li>a hook for controlling the form;</li><li>the form component.</li></ul><h4>Field object and validation schema</h4><p>To describe the typed schema for our form, let’s create a file: <em>src/widgets/offers/OfferFormCreate/base/constants.ts</em>.</p><p>The following code comments describe the what and the why.</p><pre>import * as yup from &#39;yup&#39;<br><br>import { OfferPostBody } from &#39;../../../../shared/types/api/offers.types.generated.ts&#39;<br>import { createEnumObject } from 
&#39;../../../../shared/types/utils/createEnumObject.ts&#39;<br><br><br>// Constants for yup schema validation<br>const OFFER_NAME_MAX_LENGTH = 1024<br>const OFFER_SHORT_NAME_MAX_LENGTH = 1024<br><br><br>// This object contains all field names taken from the OfferPostBody type keys.<br>// It is created by createEnumObject.<br>// After that we can use OFFER_CREATE_FORM_FIELDS everywhere we need to reference field names<br>export const OFFER_CREATE_FORM_FIELDS = createEnumObject&lt;keyof OfferPostBody&gt;({<br> advertiser_id: &#39;advertiser_id&#39;,<br> name: &#39;name&#39;,<br> short_name: &#39;short_name&#39;,<br> model: &#39;model&#39;,<br>})<br><br>// Labels mapped to form fields<br>export const OFFER_CREATE_FORM_FIELDS_LABEL: Record&lt;keyof typeof OFFER_CREATE_FORM_FIELDS, string&gt; = {<br> [OFFER_CREATE_FORM_FIELDS.advertiser_id]: &#39;Advertiser&#39;,<br> [OFFER_CREATE_FORM_FIELDS.name]: &#39;Name&#39;,<br> [OFFER_CREATE_FORM_FIELDS.short_name]: &#39;Short name&#39;,<br> [OFFER_CREATE_FORM_FIELDS.model]: &#39;Model&#39;,<br>}<br><br>// Placeholders mapped to form fields<br>export const OFFER_CREATE_FORM_FIELDS_PLACEHOLDER: Record&lt;keyof typeof OFFER_CREATE_FORM_FIELDS, string&gt; = {<br> [OFFER_CREATE_FORM_FIELDS.advertiser_id]: &#39;Select advertiser&#39;,<br> [OFFER_CREATE_FORM_FIELDS.name]: &#39;Type name&#39;,<br> [OFFER_CREATE_FORM_FIELDS.short_name]: &#39;Type short name&#39;,<br> [OFFER_CREATE_FORM_FIELDS.model]: &#39;Select model&#39;,<br>}<br><br>// This is a yup schema. 
It describes validation options for each field in OfferPostBody<br>export const OFFER_CREATE_FORM_VALIDATION_SCHEMA: yup.ObjectSchema&lt;OfferPostBody&gt; = yup.object({<br> [OFFER_CREATE_FORM_FIELDS.advertiser_id]: yup<br>   .number()<br>   .required()<br>   .nonNullable()<br>   .label(OFFER_CREATE_FORM_FIELDS_LABEL[OFFER_CREATE_FORM_FIELDS.advertiser_id]),<br> [OFFER_CREATE_FORM_FIELDS.name]: yup<br>   .string()<br>   .required()<br>   .nonNullable()<br>   .max(OFFER_NAME_MAX_LENGTH)<br>   .label(OFFER_CREATE_FORM_FIELDS_LABEL[OFFER_CREATE_FORM_FIELDS.name]),<br> [OFFER_CREATE_FORM_FIELDS.short_name]: yup<br>   .string()<br>   .required()<br>   .nonNullable()<br>   .max(OFFER_SHORT_NAME_MAX_LENGTH)<br>   .label(OFFER_CREATE_FORM_FIELDS_LABEL[OFFER_CREATE_FORM_FIELDS.short_name]),<br> [OFFER_CREATE_FORM_FIELDS.model]: yup.string().required().nonNullable().label(OFFER_CREATE_FORM_FIELDS_LABEL[OFFER_CREATE_FORM_FIELDS.model]),<br>})</pre><h4>Hook for working with the form</h4><p>Next, create a form via react-hook-form and connect the validation scheme described above to it. 
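Before wiring the schema into react-hook-form, it can be useful to see what these rules amount to. Below is a dependency-free sketch that mirrors, but does not use, the yup schema above; `validateOffer` and its error messages are hypothetical illustration code, not part of the project:

```typescript
// Hypothetical, dependency-free sketch of the rules the yup schema encodes:
// every field is required, and the string fields are capped at 1024 characters.
type OfferPostBody = {
  advertiser_id: number;
  model: string;
  name: string;
  short_name: string;
};

const MAX_LENGTH = 1024;

const validateOffer = (values: Partial<OfferPostBody>): string[] => {
  const errors: string[] = [];
  for (const field of ['name', 'short_name', 'model'] as const) {
    const value = values[field];
    if (!value) errors.push(`${field} is a required field`);
    else if (value.length > MAX_LENGTH) errors.push(`${field} must be at most ${MAX_LENGTH} characters`);
  }
  if (values.advertiser_id == null) errors.push('advertiser_id is a required field');
  return errors;
};

console.log(validateOffer({})); // four errors, one per missing field
console.log(validateOffer({ name: 'Offer', short_name: 'O', advertiser_id: 1, model: 'cpa' })); // []
```

Through the resolver, yup produces the same kind of per-field messages, which react-hook-form then attaches to the corresponding inputs.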
To isolate the work with our form, we create a hook: <em>src/widgets/offers/OfferFormCreate/base/hooks/useOfferFormCreate.ts</em>.</p><p>The what and the why are described in the code comments below.</p><pre>import { useMutation } from &#39;@tanstack/react-query&#39;<br>import { useForm } from &#39;react-hook-form&#39;<br><br>import { yupResolver } from &#39;@hookform/resolvers/yup&#39;<br>import * as yup from &#39;yup&#39;<br><br>import { apiOfferPost } from &#39;../../../../../shared/api/offers.api.ts&#39;<br>import { OFFER_CREATE_FORM_FIELDS, OFFER_CREATE_FORM_VALIDATION_SCHEMA } from &#39;../constants.ts&#39;<br><br>// yup.InferType is a utility from the Yup library used to extract a TypeScript type from a validation schema.<br>// It allows you to automatically generate data types based on a Yup schema, helping to avoid code duplication and mismatches between types.<br>type OfferFormCreateSchemaType = yup.InferType&lt;typeof OFFER_CREATE_FORM_VALIDATION_SCHEMA&gt;<br><br>// This hook is used to create a form for creating a new offer<br>export const useOfferFormCreate = () =&gt; {<br>  // It&#39;s a mutation hook from the @tanstack/react-query library. 
It handles the API request to create a new offer<br>  const { isPending, mutate } = useMutation({<br>    mutationFn: apiOfferPost,<br>  })<br><br>  // This hook creates the form for a new offer and manages its fields.<br>  // It is typed with OfferFormCreateSchemaType.<br>  // It uses the validation schema to validate fields (resolver).<br>  const form = useForm&lt;OfferFormCreateSchemaType&gt;({<br>    resolver: yupResolver(OFFER_CREATE_FORM_VALIDATION_SCHEMA),<br>  })<br><br>  // This function handles form submit<br>  const handleSubmit = (values: OfferFormCreateSchemaType) =&gt; {<br>    mutate(values)<br>  }<br><br>  // This function returns the error message for a given field<br>  const getFieldErrorMessage = (fieldName: keyof typeof OFFER_CREATE_FORM_FIELDS) =&gt; form.formState.errors[fieldName]?.message?.toString()<br><br>  return {<br>    form,<br>    getFieldErrorMessage,<br>    handleSubmit,<br>    isPending, // This property is used to check if the API request is pending<br>  }<br>}</pre><h4>Form</h4><p>Now all that is left is to implement the form using the above hook: <em>useOfferFormCreate</em>. 
The what and the why are explained in the code comments below.</p><pre>import { Button, Stack, TextInput } from &#39;@mantine/core&#39;<br><br>import { AdvertiserSelect } from &#39;../../../features/AdvertiserSelect&#39;<br>import { ModelSelect } from &#39;../../../features/ModelSelect&#39;<br>import { OFFER_CREATE_FORM_FIELDS, OFFER_CREATE_FORM_FIELDS_LABEL, OFFER_CREATE_FORM_FIELDS_PLACEHOLDER } from &#39;./base/constants.ts&#39;<br>import { useOfferFormCreate } from &#39;./base/hooks/useOfferFormCreate.ts&#39;<br><br>// This component renders the form for creating a new offer<br>export const OfferFormCreate = () =&gt; {<br> // This hook is used to get the form instance and other helpers<br> const { form, getFieldErrorMessage, handleSubmit, isPending } = useOfferFormCreate()<br><br> return (<br>   &lt;form noValidate onSubmit={form.handleSubmit(handleSubmit)}&gt;<br>     &lt;Stack&gt;<br>       {/* Offer name */}<br>       &lt;TextInput<br>         label={OFFER_CREATE_FORM_FIELDS_LABEL.name}<br>         error={getFieldErrorMessage(OFFER_CREATE_FORM_FIELDS.name)}<br>         placeholder={OFFER_CREATE_FORM_FIELDS_PLACEHOLDER.name}<br>         required<br>         {...form.register(OFFER_CREATE_FORM_FIELDS.name)}<br>       /&gt;<br><br>       {/* Offer short name */}<br>       &lt;TextInput<br>         label={OFFER_CREATE_FORM_FIELDS_LABEL.short_name}<br>         error={getFieldErrorMessage(OFFER_CREATE_FORM_FIELDS.short_name)}<br>         placeholder={OFFER_CREATE_FORM_FIELDS_PLACEHOLDER.short_name}<br>         required<br>         {...form.register(OFFER_CREATE_FORM_FIELDS.short_name)}<br>       /&gt;<br><br>       {/* Offer advertiser */}<br>       &lt;AdvertiserSelect<br>         label={OFFER_CREATE_FORM_FIELDS_LABEL.advertiser_id}<br>         name={OFFER_CREATE_FORM_FIELDS.advertiser_id}<br>         error={getFieldErrorMessage(OFFER_CREATE_FORM_FIELDS.advertiser_id)}<br>         form={form}<br>         
placeholder={OFFER_CREATE_FORM_FIELDS_PLACEHOLDER.advertiser_id}<br>       /&gt;<br><br>       {/* Offer model */}<br>       &lt;ModelSelect<br>         label={OFFER_CREATE_FORM_FIELDS_LABEL.model}<br>         name={OFFER_CREATE_FORM_FIELDS.model}<br>         error={getFieldErrorMessage(OFFER_CREATE_FORM_FIELDS.model)}<br>         form={form}<br>         placeholder={OFFER_CREATE_FORM_FIELDS_PLACEHOLDER.model}<br>       /&gt;<br><br>       {/* Submit button with loading state */}<br>       &lt;Button loading={isPending} type=&quot;submit&quot;&gt;<br>         Submit<br>       &lt;/Button&gt;<br>     &lt;/Stack&gt;<br>   &lt;/form&gt;<br> )<br>}</pre><h3>Overall</h3><p>Because the OFFER_CREATE_FORM_FIELDS field object and the OFFER_CREATE_FORM_VALIDATION_SCHEMA validation schema are both tied to the OfferPostBody type, the resulting form is quite easy to maintain:</p><ul><li>field naming is uniform, so it is easy to refactor field access through IDE tools;</li><li>there is a single binding to the OfferPostBody type, so if this type changes (after schema regeneration), the compiler highlights every place in the code that no longer corresponds to the new API schema;</li><li>form field values and the data-sending function are strictly typed.</li></ul><p>You can find the full code on GitHub here: <a href="https://github.com/dipiash/fully-typed-form">fully-typed-form</a>.</p><hr><p><a href="https://medium.com/quadcode-life/fully-typed-forms-based-on-api-schema-with-react-hook-form-openapi-typescript-and-yup-validation-93ba1321368b">Fully typed forms based on API schema with react-hook-form, openapi-typescript and yup validation</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Vite, Nginx and environment variables for a static website at runtime]]></title>
            <link>https://medium.com/quadcode-life/vite-nginx-and-environment-variables-for-a-static-website-at-runtime-f3d0b2995fc7?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/f3d0b2995fc7</guid>
            <category><![CDATA[nginx]]></category>
            <category><![CDATA[react]]></category>
            <category><![CDATA[environment-variables]]></category>
            <category><![CDATA[vites]]></category>
            <category><![CDATA[envsubst]]></category>
            <dc:creator><![CDATA[Dmitrii Pashkevich]]></dc:creator>
            <pubDate>Fri, 17 May 2024 07:44:31 GMT</pubDate>
            <atom:updated>2024-05-17T07:44:31.646Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/938/1*JIN2irSrv7J4-Zl-rdH0wg.jpeg" /></figure><p>Hello everyone! My name is Dmitry Pashkevich, and I’m a Frontend developer at Quadcode. Today I’ll share a method for passing environment variables to a statically built website using the <a href="https://vitejs.dev/">Vite</a> build tool in conjunction with the Nginx web server.</p><p>A common task in frontend development is passing environment variables to the application depending on the environment in which the application is running. It seems like a simple task, and everything is described in the <a href="https://vitejs.dev/guide/env-and-mode">documentation</a> — just place a <em>.env</em> file in the project and run the build… on each environment.</p><p>The solution seems to be found. But this leads to a situation where each environment has a different build process and a different build result.</p><p>In practice, problems arise in the build steps themselves. For example, when changes are made, the settings, scripts, and so on for one of the environments are forgotten and left outdated. As a result, we encounter issues with the application itself, since the artifacts differ as well.</p><p>Thus, it seems logical to obtain a single build artifact for all available environments and be able to pass environment variable values. 
Therefore, it is easier to troubleshoot one issue, the variable values, than to also investigate the build steps.</p><p>Now, let’s see how to do this using the Vite and Nginx tools as an example.</p><h3>Repository Preparation</h3><p>First, let’s create a project from the template provided by the Vite builder for React + TypeScript.</p><pre>npm create vite@latest vite-nginx-dynamic-env-variables-example -- --template react-ts &amp;&amp; cd vite-nginx-dynamic-env-variables-example &amp;&amp; npm install</pre><h3>Project Configuration Preparation</h3><p>After successfully executing the commands, let’s open the resulting project in our favorite IDE and start creating the target solution.</p><p>Let’s adjust the file <em>src/vite-env.d.ts</em>: we will add a type describing the available environment variables to enable <a href="https://vitejs.dev/guide/env-and-mode.html#intellisense-for-typescript">IDE hinting</a>.</p><pre>/// &lt;reference types=&quot;vite/client&quot; /&gt;<br><br>interface ImportMetaEnv {<br>    readonly VITE_VERSION: string<br>}<br><br>interface ImportMeta {<br>    readonly env: ImportMetaEnv<br>}</pre><p>Now the IDE will provide hints about the available environment variables.</p><p>Next, let’s create a file with environment variable templates: <em>src/shared/projectEnvVariables.ts</em> and add the following content to it.</p><pre>type ProjectEnvVariablesType = Pick&lt;ImportMetaEnv, &#39;VITE_VERSION&#39;&gt;<br><br><br>// Environment variable template to be replaced at runtime<br>const projectEnvVariables: ProjectEnvVariablesType = {<br>   VITE_VERSION: &#39;${VITE_VERSION}&#39;,<br>}<br><br><br>// Return the value substituted at runtime, or the one obtained as a result of the build<br>export const getProjectEnvVariables = (): {<br>   envVariables: ProjectEnvVariablesType<br>} =&gt; {<br>   return {<br>       envVariables: {<br>           VITE_VERSION: !projectEnvVariables.VITE_VERSION.includes(&#39;VITE_&#39;) ? 
projectEnvVariables.VITE_VERSION : import.meta.env.VITE_VERSION,<br>       }<br>   }<br>}</pre><p>Next, we need to change the build configuration in <em>vite.config.ts</em> so that the file created above has a predictable name after the build stage. To do this, add a section with the rollup configuration to the config.</p><pre>import { defineConfig } from &#39;vite&#39;<br>import react from &#39;@vitejs/plugin-react&#39;<br><br>// https://vitejs.dev/config/<br>export default defineConfig({<br>   plugins: [react()],<br>   build: {<br>       rollupOptions: {<br>           output: {<br>               format: &#39;es&#39;,<br>               globals: {<br>                   react: &#39;React&#39;,<br>                   &#39;react-dom&#39;: &#39;ReactDOM&#39;,<br>               },<br>               manualChunks(id) {<br>                   if (/projectEnvVariables.ts/.test(id)) {<br>                       return &#39;projectEnvVariables&#39;<br>                   }<br>               },<br>           },<br>       }<br>   }<br>})</pre><p>In the <a href="https://rollupjs.org/configuration-options/#output-manualchunks">manualChunks</a> section, we create a custom chunk and fix part of its name so that after the build we can find this file to substitute environment variables into.</p><p>Let’s make changes to the <em>src/App.tsx</em> file to see the values of environment variables.</p><pre>import { getProjectEnvVariables } from &quot;./shared/projectEnvVariables.ts&quot;;<br><br>const { envVariables } = getProjectEnvVariables()<br><br>function App() {<br> return (<br>     &lt;&gt;<br>         &lt;h1&gt;VITE_VERSION&lt;/h1&gt;<br>         &lt;div&gt;{envVariables.VITE_VERSION}&lt;/div&gt;<br><br>         &lt;hr /&gt;<br><br>         &lt;h2&gt;import.meta.env.VITE_VERSION&lt;/h2&gt;<br>         &lt;div&gt;{import.meta.env.VITE_VERSION}&lt;/div&gt;<br>     &lt;/&gt;<br> )<br>}<br><br>export default App</pre><p>Next, let’s run the build to make sure that we 
obtain the necessary chunk for substituting variables after the build stage.</p><pre>npm run build</pre><p>After the build is complete, navigate to the <em>dist/assets</em> directory. You will see that a chunk named <em>projectEnvVariables*</em>, which we specified in the configuration above, exists.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/594/0*ktJM-uxP1B_MHsUh" /></figure><p>Next, let’s conduct a series of experiments.</p><p>To make it easy to see that the desired build result is obtained, each build will be performed with the environment variable set. This lets us visually verify the condition for returning the value of the environment variable in the <em>getProjectEnvVariables</em> function.</p><p>For the first experiment, create a <em>.env</em> file in the project root with the following contents.</p><pre>VITE_VERSION=dev</pre><p>Let’s run the project build and then the preview mode to view the build results.</p><pre>npm run build &amp;&amp; npm run preview</pre><p>Upon navigating to <a href="http://localhost:4173/">http://localhost:4173/</a>, you will see two identical values of the variable: one read via the config and one read directly from the environment variable.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/740/0*v54K2xxJ6lxc8gNS" /></figure><p>For the second experiment, let’s replace the variable in the <em>dist/assets/projectEnvVariables-wa84hTgi.js</em> file, which was generated after building the application. Replace the line with the value ${VITE_VERSION} with <em>dev_from_env</em> in this file. After refreshing the page in the browser, you will get the updated version of the variable on the screen, read from the config <em>getProjectEnvVariables</em>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/740/0*i78JsfhPexEVriyi" /></figure><p>Everything works as expected! 
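The manual edit from the second experiment can also be scripted. Here is a sketch of the idea, using sed in place of envsubst and a throwaway file with a made-up name, since the real chunk name contains a per-build hash:

```shell
# Simulate a built chunk containing the ${VITE_VERSION} placeholder
# (the real file lives in dist/assets with a content hash in its name).
mkdir -p dist-demo/assets
printf 'const VITE_VERSION = "${VITE_VERSION}";\n' > dist-demo/assets/projectEnvVariables-demo.js

# Find the chunk by its stable name prefix and substitute the value in place.
f=$(ls dist-demo/assets/projectEnvVariables*.js | head -n1)
sed -i 's/\${VITE_VERSION}/dev_from_env/' "$f"

cat "$f"   # const VITE_VERSION = "dev_from_env";
```

The init script described below performs the same substitution with envsubst at container startup.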
It’s time to automate variable substitution.</p><h3>Preparing Docker + Nginx Configuration</h3><p>We’ll demonstrate the automation of variable substitution using a Docker container with the Nginx web server, which executes <a href="https://www.nginx.com/resources/wiki/start/topics/examples/initscripts/">initialization scripts</a> before startup and substitutes environment variables using <a href="https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html">envsubst</a>.</p><p>Let’s create a .docker directory in the project root with the configuration for the Nginx web server that will serve the application. A complete example of the Nginx configuration can be found in the <a href="https://github.com/dipiash/vite-nginx-dynamic-env-variables-example">repository</a>, and below is the shell code of the <em>.docker/app/nginx/init-scripts/100-init-project-env-variables.sh</em> file, which replaces the environment variables.</p><pre>#!/usr/bin/env sh<br><br>set -ex<br><br># Find the file where environment variables need to be replaced<br>projectEnvVariables=$(ls -t /usr/share/nginx/html/assets/projectEnvVariables*.js | head -n1)<br><br># Replace environment variables<br>envsubst &lt; &quot;$projectEnvVariables&quot; &gt; ./projectEnvVariables_temp<br>cp ./projectEnvVariables_temp &quot;$projectEnvVariables&quot;<br>rm ./projectEnvVariables_temp</pre><p>Next, in the project root, create a Dockerfile with the following content, which describes the application build and runs the Nginx web server to serve the static files.</p><pre>FROM node:20-alpine as builder<br><br>WORKDIR /app<br><br>COPY package.json package-lock.json ./<br><br>RUN npm ci<br><br>COPY . 
.<br><br>ARG NODE_ENV=production<br>ENV NODE_ENV=${NODE_ENV}<br><br>RUN npm run build<br><br>FROM nginx:alpine<br><br>ARG VITE_VERSION=dev<br>ENV VITE_VERSION=${VITE_VERSION}<br><br>ARG PORT=80<br>ENV NGINX_PORT=${PORT}<br>ENV NGINX_HOST=localhost<br><br>EXPOSE ${PORT}<br><br>COPY .docker/app/nginx/nginx.conf /etc/nginx/nginx.conf<br>COPY .docker/app/nginx/conf.d/ /etc/nginx/conf.d/<br>COPY .docker/app/entrypoint.sh /entrypoint.sh<br>COPY .docker/app/nginx/init-scripts/ /docker-entrypoint.d/<br><br>WORKDIR /usr/share/nginx/html<br><br>COPY --from=builder /app/dist ./</pre><p>Next, let’s build the container.</p><pre>docker build -t vite-nginx-dynamic-env-variables-example .</pre><p>Next, let’s run the created container with a new value for the environment variable available in the application.</p><pre>docker run -p 81:80 -e VITE_VERSION=FROM_NGINX \<br>vite-nginx-dynamic-env-variables-example</pre><p>Upon navigating to <a href="http://127.0.0.1:81">http://127.0.0.1:81</a>, we see that the environment variable is initialized with the current value, while the directly read environment variable remains with the old value.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/740/0*ukJd94zCo_IfhduJ" /></figure><h3>Conclusion</h3><p>This way, environment variables can be substituted into a statically built application at runtime, allowing for a unified build across all environments.</p><p>The code can be found in <a href="https://github.com/dipiash/vite-nginx-dynamic-env-variables-example">the repository</a>.</p><hr><p><a href="https://medium.com/quadcode-life/vite-nginx-and-environment-variables-for-a-static-website-at-runtime-f3d0b2995fc7">Vite, Nginx and environment variables for a static website at runtime</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are 
continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bloat in Postgresql. How to live with it?]]></title>
            <link>https://medium.com/quadcode-life/bloat-in-postgresql-how-to-live-with-it-46ebeca74587?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/46ebeca74587</guid>
            <category><![CDATA[posgresql]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[development]]></category>
            <dc:creator><![CDATA[Mikhail]]></dc:creator>
            <pubDate>Wed, 15 Nov 2023 19:43:41 GMT</pubDate>
            <atom:updated>2023-11-15T19:43:41.517Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wbqVTXxNlbsdBWfR6deiOA.jpeg" /></figure><p>You probably know about one of the unpleasant effects that arise during the operation of PostgreSQL — the table bloat effect. I want to tell you about how to prevent bloat and what to do if it has already appeared in your system.</p><p>Firstly, let’s briefly discuss the mechanism of its occurrence. To maintain data consistency, PostgreSQL uses the MVCC (Multiversion Concurrency Control) model. This means that each transaction sees a specific snapshot of the database at the moment the transaction begins. To make this possible, each row stores all its states because they might be needed by someone. And if you delete or update a row, in fact, the row remains in an unchanged state, but becomes invisible to new transactions. PostgreSQL has a VACUUM mechanism to clear the database, which marks the rows that are definitely no longer needed as free for use. However, “holes” remain inside the table, leading to bloat. Bloat in indexes occurs in a similar way.</p><h3>How to understand if your tables have bloat?</h3><p>Firstly, there are a large number of queries based on PostgreSQL statistics that provide estimated information about tables or indexes. You can choose the one you like; in general, they use the same data and should not differ significantly. I use queries from<a href="https://github.com/ioguix/pgsql-bloat-estimation"> this repository</a>. These queries work quite quickly since they do not fully scan the tables, but they can give some inaccuracies. To assess whether there are problems with your tables, this inaccuracy can be ignored.</p><p>However, if you need accurate information, you can use the <a href="https://www.postgresql.org/docs/current/pgstattuple.html">pgstattuple</a> extension. It provides information on the size of the table itself and how much space is occupied by “live” rows. 
The downside of this extension is that it fully scans the entire table, which can load your DB and take a considerable amount of time.</p><h3>Is Bloat in Tables a Problem?</h3><p>There is a clear problem associated with bloat: the table and its indexes take up a lot of disk space, and disk space is never free. Sometimes bloat can account for up to 99% of the size of the table itself. This leads to the second problem: when a table becomes bloated, the database needs to scan much more data for queries, which affects the performance of the database, sometimes quite significantly.</p><h3>How to Prevent Bloat</h3><ul><li>Avoid long transactions. Long transactions interfere with the vacuum’s work, or rather, they postpone the deletion of dead rows (because they are still visible to your long transaction). As a result, holes turn out to be bigger and take longer to clear. I recommend enabling logging for long transactions, so you can later understand from the logs what is causing the slowdown. This is controlled by the <a href="https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT">log_min_duration_statement</a> parameter. For example, if you add to postgresql.conf:</li></ul><pre>log_min_duration_statement=1000</pre><p>any queries that take longer than one second to execute will be logged.<br>To limit long queries, add a cron job that will prevent very long operations from executing, like this:</p><pre>/usr/bin/psql -xt -c &quot;SELECT pg_terminate_backend(pid) FROM<br>pg_stat_activity WHERE xact_start &lt; NOW() - &#39;10 min&#39;::INTERVAL<br>AND state != &#39;idle&#39; AND usename != &#39;postgres&#39;&quot;</pre><p>Also, PostgreSQL has a parameter <a href="https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-STATEMENT-TIMEOUT">statement_timeout</a>, but the documentation explicitly advises against setting it globally. 
Use it when you execute a query (perhaps manually, perhaps from an application) and want to control its execution time:</p><pre>BEGIN;<br>SET LOCAL statement_timeout = 1000;<br>SELECT …;<br>COMMIT;</pre><ul><li>Configure autovacuum. There are quite a few settings here; let’s consider some of them. By default, autovacuum triggers when more than 20% of the rows in a table have changed, but the larger the table, the longer these changes will take to accumulate. This is controlled by the <a href="https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR">autovacuum_vacuum_scale_factor</a> setting, and you can decrease it globally or for a specific table. Moreover, if you are configuring autovacuum for a specific table, in some cases you can even set autovacuum_vacuum_scale_factor to 0, while increasing <a href="https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-THRESHOLD">autovacuum_vacuum_threshold</a>:</li></ul><pre>alter table tablename set (autovacuum_vacuum_scale_factor = 0,<br>autovacuum_vacuum_threshold = 1000000);</pre><p>As a result of executing the command, autovacuum for this table will trigger after every million modified rows, regardless of the table size.<br>Ensure you have enough autovacuum workers. The <a href="https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-MAX-WORKERS">autovacuum_max_workers</a> setting determines the maximum number of workers, and if all of them are busy (by default, there are only 3), this can mean that some table already requires maintenance but isn’t getting it. The easiest way to check the number of active workers is with the following query:</p><pre>select COUNT(*) from pg_stat_activity where query like &#39;autovacuum:%&#39;</pre><p>If the autovacuum can’t keep up, that’s a significant problem. 
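</p><p>For intuition about when autovacuum fires on a given table, the trigger point is roughly <em>autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor × reltuples</em>. A quick arithmetic sketch with the default settings and a made-up table size:</p>

```shell
# Approximate dead-tuple count at which autovacuum starts on a table:
#   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
reltuples=10000000   # hypothetical 10M-row table
threshold=50         # default autovacuum_vacuum_threshold
scale_pct=20         # default autovacuum_vacuum_scale_factor (0.2) as a percent
echo $(( threshold + reltuples * scale_pct / 100 ))   # 2000050
```

<p>On a large table the default scale factor therefore lets millions of dead rows pile up before any cleanup starts, which is why lowering it, or switching to a fixed threshold, matters.</p><p>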
You need to increase the aforementioned autovacuum_max_workers, and there are also a number of steps you can take to speed up autovacuum’s work. I won’t go into all the settings in detail, but I will list the ones to pay attention to:</p><ol><li><a href="https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM">maintenance_work_mem</a> — increase if the server’s memory allows.</li><li><a href="https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-COST-DELAY">autovacuum_vacuum_cost_delay</a> — decrease if disk performance allows.</li><li><a href="https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-LIMIT">vacuum_cost_limit</a> — increase if disk performance allows.</li><li><a href="https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-NAPTIME">autovacuum_naptime</a> — decrease if disk performance allows.</li></ol><ul><li>PostgreSQL has a built-in mechanism that helps prevent bloat — <a href="https://www.postgresql.org/docs/current/storage-hot.html">HOT</a> (Heap Only Tuples). Thanks to this mechanism, first, new entries do not appear in indexes for updated rows, and for tables, rows can be marked as free even without performing a vacuum on the table, which indirectly also reduces bloat. But there are two important limitations. First, the updated columns must not be part of any index on the table. Second, the new (updated) row must be on the same page as the old one, meaning there must be enough space. This can be controlled by setting the fillfactor parameter for the required tables:</li></ul><pre>ALTER TABLE tablename SET (fillfactor = 70);</pre><p>I recommend that you explore this feature for tables that are frequently updated, but I must note that the benefits of using this mechanism will depend on the specific conditions. 
For example, I personally couldn’t achieve significant improvements for tables where I tried to play with settings for HOT.</p><ul><li>In another <a href="https://medium.com/quadcode-life/features-of-working-with-postgresql-62c5a4224627#2299">article about working with PostgreSQL for developers,</a> I discussed several nuances that also lead to an increase in bloat, which you can’t influence from the PostgreSQL side, but you can influence from the application side. I provided recommendations for effective work there, and I recommend checking it out as well.</li></ul><h3>How to deal with Bloat</h3><p>When bloat has already occurred for your tables, you obviously want to get rid of it. Here are several methods we use:</p><h3>VACUUM</h3><p>The obvious method provided by the database itself is to manually execute the VACUUM FULL command. Unlike a regular VACUUM, this operation completely rewrites the table and indexes, and the holes are closed. However, this operation requires a lock on the table, so its use is usually not recommended. But it is not prohibited, and in some cases, you can do it quite safely. First, you might have maintenance windows during which you can maintain the database without affecting the application, then you just need to make sure that you fit into this window. Second, if you have a small table, perhaps its maintenance will take seconds, and you can afford to run VACUUM FULL (I recommend running it with a specified statement_timeout, as I showed above).</p><h3>REINDEX CONCURRENTLY</h3><p>This won’t help you get rid of bloat in tables, but it will help get rid of bloat in indexes, and in some cases, that’s enough. Given that this functionality is built-in and safe to apply, it’s worth keeping in mind. Be sure to call the command with the CONCURRENTLY option. 
Otherwise, the database will take a lock on the table:</p><pre>REINDEX TABLE CONCURRENTLY my_table</pre><h3><a href="https://reorg.github.io/pg_repack/">pg_repack</a></h3><p>This is quite a popular utility that also completely recreates the table but, unlike VACUUM FULL, does so without long locks on the table. At the start, it creates a log table that captures changes to the original table, along with a new table into which the data will be copied. After copying is complete, all changes from the log are applied, and the new table replaces the original. Installation and usage example:</p><pre>sudo apt-get install postgresql-15-repack<br>sudo -u postgres psql -c &quot;CREATE EXTENSION pg_repack&quot; -d my_db<br>sudo -u postgres pg_repack -d my_db -t my_table1 -t my_table2</pre><p>You can run pg_repack without specifying tables, but in this case, all tables in the database will be processed, and it’s unlikely that all your tables need to be recreated. Especially since pg_repack has nuances in its operation that you should be aware of:</p><ul><li>pg_repack creates a copy of the table with its indexes, so you need to have at least enough space on your DB server for another such table. Since fighting bloat often starts when disk space is already running out, this can be a problem. I advise first finding small tables that can be compressed and processing them, and then moving on to <strong>larger</strong> tables.</li><li>pg_repack cannot process just any table. It requires that the table have a Primary Key or a unique index. An index always creates an additional load on the database, so I cannot recommend creating an index just for pg_repack’s operation. Of course, you can create it and delete it after maintenance, or you can use other methods.</li><li>pg_repack creates a significant load on the disk. 
Since its main advantage is that it does not require prolonged locks, it is used when the database is under production load, and the additional disk load will lead to the degradation of database and application performance. In theory, you can try to reduce the I/O priority of the process copying the table using the `ionice` utility. But, for example, in modern Ubuntu, the default I/O scheduler is mq-deadline, which does not allow you to change the priority. Or choose a time of the lowest load on the database.</li><li>In addition to the disk load, network load will be created to copy the generated WAL files to your replicas. In our case, we easily hit two gigabits, after which interesting effects start to occur. Of course, the replica begins to lag. In itself, this is not a problem, but if your application also connects to the replica and expects up-to-date data there, it may start to work incorrectly. But in addition to this, at some point, the WALs on your master start to rotate, and it is quite possible that the replica has not yet managed to read them, and the master has already deleted them. As a simple solution, you can increase the wal_keep_size parameter on the master so that it stores enough data for the replica to keep up. As for the applications — we redirected all traffic to the master during maintenance. Later, we solved the problem more radically — we installed 10Gbit network cards on such servers, but this is usually not the fastest solution.</li><li>The utility creates long transactions. I have already written above about why this can be bad. It is impossible to influence this, but it must be kept in mind.</li></ul><h3><a href="https://github.com/dataegret/pgcompacttable">pgcompacttable</a></h3><p>This utility fights bloat using the same mechanisms that create it. 
If you update a row without actually changing any column values, the new version will still be written to free space in the database (provided, of course, that the conditions for HOT are not met, as I wrote about earlier). So, if you update the rows page by page, the pages will be freed up. This is exactly what pgcompacttable does, leaving unoccupied pages at the end of the table. Under such conditions, vacuum can remove this end of the table and free up space. Example of use:</p><pre>curl -s https://raw.githubusercontent.com/dataegret/pgcompacttable/master/bin/pgcompacttable -o pgcompacttable<br>chmod +x pgcompacttable<br>sudo apt-get install libdbi-perl libdbd-pg-perl<br>sudo -u postgres psql -c &quot;create extension if not exists pgstattuple&quot; -d my_db<br>sudo -u postgres pgcompacttable -h /var/run/postgresql -d my_db</pre><p>Let me give you a more detailed comparison of the usage differences with pg_repack:</p><ul><li>The utility collects bloat statistics using the pgstattuple extension, so you can run it across the entire database, and it will decide for itself what needs to be “compacted” and what does not.</li><li>pgcompacttable can limit its operational speed, meaning the impact on the disk can be regulated. However, I should note that this applies only directly to the process of updating rows; the vacuum and reindex procedures and the statistics collection using pgstattuple cannot be throttled by the utility.</li><li>You won’t need a lot of additional disk space. All manipulations are performed in the same table, so the table will not grow. However, indexes are recreated using built-in Postgres tools, and additional space will be required for them.</li></ul><p>Yet, there are also special considerations that you need to be aware of.</p><ul><li>I mentioned that the utility can only regulate the load it directly generates. 
At the same time, a bloat assessment using pgstattuple is called for each table, and for large tables, this process is lengthy and disk-intensive as the extension scans every page in the table. In such cases, I replaced the use of the pgstattuple function with pgstattuple_approx. The utility is written in Perl, and it is quite simple to modify. You can use my <a href="https://github.com/burochkin/pgcompacttable">fork</a>, which already supports this. As for the load from vacuum, the easiest way is to adjust the <a href="https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-DELAY">vacuum_cost_delay</a> parameter — by default, it is 0, which means the vacuum is not limited in any way.</li><li>Just like with pg_repack, the issue of generating a large number of WAL files and potential replication lag is relevant — you need to monitor this.</li><li>You also need to watch out for long-running transactions.</li><li>The vacuum has an unpleasant feature where it “trims” the end of the table (which is the expected behavior when working with pgcompacttable). When applying these changes to a replica, the table on which this occurs is locked for reading (on the replica, of course). This effect can be minimized if vacuum is called during operation, not just at the end of the row transfer (controlled by the --routine-vacuum option), but, unfortunately, this does not always help. The vacuum that trims the table takes short-term locks, and if there is intensive writing to the table, it may not complete this process. This means that the table may later be trimmed by autovacuum, and that can happen at any moment.</li></ul><p>Of course, these utilities have a large number of settings that I am not covering here, because they are not directly related to the features I have described. 
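</p><p>One parameter worth restating in this context is <em>wal_keep_size</em>, mentioned earlier as a remedy for replicas falling behind during maintenance; the value below is purely illustrative, not a recommendation:</p>

```sql
-- Keep extra WAL on the primary so a replica lagging behind during
-- pg_repack / pgcompacttable maintenance can still catch up
-- (PostgreSQL 13+; older versions use wal_keep_segments instead)
ALTER SYSTEM SET wal_keep_size = '16GB';
SELECT pg_reload_conf();
```

<p>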
I strongly recommend approaching their launch very carefully and monitoring the metrics of both the database and the application: even now that you know about their peculiarities, they can still impact the performance of your applications.</p><p>I hope that this article will help you combat bloat in your databases, or even better, prevent its occurrence in the first place. Feel free to ask questions in the comments; I will be happy to answer them.</p><hr><p><a href="https://medium.com/quadcode-life/bloat-in-postgresql-how-to-live-with-it-46ebeca74587">Bloat in Postgresql. How to live with it?</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Improving development productivity: the magic of a unified ESLint configuration]]></title>
            <link>https://medium.com/quadcode-life/improving-development-productivity-the-magic-of-a-unified-eslint-configuration-e32aa71b063b?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/e32aa71b063b</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[tutorial]]></category>
            <category><![CDATA[eslint-config]]></category>
            <category><![CDATA[js-tutorial]]></category>
            <category><![CDATA[eslint]]></category>
            <dc:creator><![CDATA[Dmitrii Pashkevich]]></dc:creator>
            <pubDate>Fri, 22 Sep 2023 16:57:17 GMT</pubDate>
            <atom:updated>2023-09-22T16:57:17.716Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cJlW1Fld56PPVrj_" /></figure><p>Hello everyone! My name is Dmitry Pashkevich, and I’m a Frontend developer at Quadcode. This article isn’t just a tutorial on creating a unified ESLint configuration that can be reused between projects. It’s a story about solving the pain of repeated discussions about code formatting during reviews, project after project.</p><p>The article will be useful for developers who want to standardize their approach to code formatting across different projects, or who are looking for a proven solution for codebase standardization.</p><p>[SPOILER ALERT] You can review the code in our <a href="https://github.com/dipiash/eslint-plugin-nimbus-clean">repository on GitHub</a> as well as find the package on <a href="https://www.npmjs.com/package/eslint-plugin-nimbus-clean">NPM</a>.</p><h3>Why do we need a unified ESLint plugin/configuration?</h3><p>Uniform code formatting in a team reduces the mental load during code reviews, reading/writing code, or starting a new project. It allows one to focus on how the code works rather than being distracted by how semicolons are placed.</p><p>Imagine you have 5 projects, and each has its own formatting rules. You start the 6th and copy configs from previous projects, adding new rules. And so on in a loop. We end up with inconsistent ESLint configs in all projects and, consequently, inconsistently formatted code between projects. As a result, simple things are discussed over and over during code reviews.</p><p>In this article, I will explain how to write a plugin/configuration for ESLint and publish it as a package. 
This will allow you to correct, add, and change the necessary rules in one place and connect them as a single module in other projects.</p><p>In our Quadcode team, we use publication to a private registry, but in this article, you’ll see publication to NPM (in the source code).</p><h3>Repository Preparation</h3><p>So, the first thing we’ll start with is creating a template for the ESLint plugin project.</p><p>To do this, go to the <a href="https://eslint.org/docs/latest/extend/plugins#create-a-plugin">ESLint documentation</a>, the “Create Plugin” section, and follow the recommendation for creating a new project. Navigate to the<a href="https://www.npmjs.com/package/generator-eslint"> installation section</a> and perform the required actions.</p><p>Open the command line.</p><h3>Installing Node.js</h3><p>If you haven’t installed the <a href="https://nodejs.org/en">Node.js</a> platform yet, you need to do so.</p><h3>Installing Yeoman</h3><p>Next, install <a href="https://yeoman.io/">Yeoman</a> — a tool for generating template projects, if you haven’t installed it yet.</p><pre>npm i -g yo</pre><h3>Installing the ESLint Plugin Generator</h3><p>Next, we’ll install a utility to generate an ESLint plugin.</p><pre>npm i -g generator-eslint</pre><p>Great! All preliminary work is done, now it’s time to create the base project for our plugin.</p><h3>Creating the Project Directory</h3><p>Let’s create a directory for our project.</p><pre>mkdir eslint-plugin-nimbus-clean</pre><p>And navigate into it.</p><pre>cd ./eslint-plugin-nimbus-clean</pre><p>Next, we’ll set up the project structure.</p><pre>yo eslint:plugin</pre><p>This command will start the wizard for creating an ESLint plugin project.</p><h3>Answering the Setup Wizard’s Questions</h3><p>Let’s go through a short survey.</p><pre>? What is your name? dipiash<br>? What is the plugin ID? nimbus-clean<br>? Type a short description of this plugin: A comprehensive linting solution that sweeps your code clean<br>? 
Does this plugin contain custom ESLint rules? No<br>? Does this plugin contain one or more processors? No</pre><p>The last two questions were answered with “No” because at this stage, we won’t be using any custom rules or custom processors. Instead, we will use a certain combination of other plugins.</p><p>Wait for the generator to create the starting project and open the resulting project in your IDE.</p><h3>Setting Up .gitignore File</h3><p>Next, we will create a “.gitignore” file to exclude unnecessary files from being committed to the repository.</p><pre>touch .gitignore</pre><p>To avoid drafting this file’s content from scratch, I always use the service: <a href="https://www.toptal.com/developers/gitignore">https://www.toptal.com/developers/gitignore</a>. You can also find plugins for your IDE that allow you to generate this file directly there.</p><p>We’re interested in “.gitignore” for Node.js — take the content from <a href="https://www.toptal.com/developers/gitignore/api/node">the link</a> and add it to the “.gitignore” file we created earlier.</p><h3>Initializing git</h3><p>Initialize the git repository.</p><pre>git init</pre><h3>Project Preparation</h3><p>Let’s make changes to the created project.</p><p>In the future, we might need to write custom rules. 
Let’s immediately add a <a href="https://www.npmjs.com/package/eslint-plugin-eslint-plugin">plugin</a> for linting ESLint rules and connect it as indicated in the documentation.</p><pre>npm install eslint-plugin-eslint-plugin --save-dev</pre><p>In the package.json file, in the “scripts” section, add two commands: “build” and “pack”.</p><p>The “build” command will compile our project.</p><pre>rm -rf ./dist &amp;&amp; mkdir ./dist &amp;&amp; cp -r ./lib/* ./dist</pre><p>The “pack” command will be needed to locally check the plugin’s operation.</p><pre>npm pack --pack-destination=./dist</pre><p>We should also adjust the sections: “main”, “exports”, and “files”, as the content for npm publication will be located in the “dist” directory.</p><pre>&quot;main&quot;: &quot;./dist/index.js&quot;,<br>&quot;exports&quot;: &quot;./dist/index.js&quot;,<br>&quot;files&quot;: [<br>  &quot;/dist&quot;,<br>  &quot;README.md&quot;,<br>  &quot;package.json&quot;<br>]</pre><p>Additionally, let’s edit the “lib/index.js” file. We won’t need the <strong>requireindex</strong> package, so remove that part of the code.</p><h3>ESLint Plugin vs. ESLint Config</h3><p>When setting up ESLint, one can often encounter packages with names starting with “eslint-plugin-*” and “eslint-config-*”. So, what’s the difference?</p><p>Plugins must be named as “eslint-plugin-*”. When adding a plugin to your project, the rules won’t be enabled automatically, and therefore you will need to enable each rule individually.</p><p>Configs must be named as “eslint-config-*”. When adding a config to your project, all rules will be enabled automatically, and you won’t need to enable each rule individually.</p><p>In practice, plugins are needed if you are creating your own code linting rules and want the plugin users to be able to turn them on or off by themselves. 
In all other cases, you can use a config since it will most likely simply reuse a set of configurations from other plugins.</p><p>However, a plugin can also be used as a config with general rules enabled by default. In this article, we will consider such a plugin version that includes a default configuration (recommended). Often in the documentation for plugins, one can see that such plugins are connected to the “extends” section as “plugin:your-plugin-name/recommended” — more details can be found in the <a href="https://eslint.org/docs/latest/extend/plugins#configs-in-plugins">ESLint documentation</a>.</p><h3>Set of Configs / Plugins</h3><p>Next, let’s determine the main plugins that we will use in our projects.</p><h3>ESLint</h3><p>This is directly the <a href="https://www.npmjs.com/package/eslint">eslint</a> itself, from which we will take the <a href="https://eslint.org/docs/latest/use/getting-started#configuration">recommended config</a>.</p><h3>Prettier</h3><p>So as not to use a separate configuration for <a href="https://prettier.io/">Prettier</a>, we’ll add to our ESLint plugin/config the rules from the packages:</p><ul><li><a href="https://www.npmjs.com/package/eslint-config-prettier">eslint-config-prettier</a> — disables all rules that are unnecessary or might conflict with Prettier.</li><li><a href="https://www.npmjs.com/package/eslint-plugin-prettier">eslint-plugin-prettier</a> — allows us to set up Prettier as ESLint rules and will display information about problems as ESLint issues.</li></ul><h3>Imports</h3><p>Let’s set up rules for working with import/export in our code.</p><ul><li><a href="https://www.npmjs.com/package/eslint-plugin-import">eslint-plugin-import</a> — will allow us to avoid various issues when importing/exporting modules in the code.</li><li><a href="https://www.npmjs.com/package/eslint-import-resolver-typescript">eslint-import-resolver-typescript</a> — will add TypeScript support for the previous plugin.</li><li><a 
href="https://www.npmjs.com/package/eslint-plugin-simple-import-sort">eslint-plugin-simple-import-sort</a> — will allow us to configure module sorting in the desired order according to certain rules.</li></ul><h3>React</h3><p>Since all our projects are written using React, we will naturally add linting support for code written with React.</p><ul><li><a href="https://www.npmjs.com/package/eslint-plugin-react">eslint-plugin-react</a> — rules for linting React code.</li><li><a href="https://www.npmjs.com/package/eslint-plugin-react-hooks">eslint-plugin-react-hooks</a> — will help us adhere to the rules for writing React Hooks.</li><li><a href="https://www.npmjs.com/package/eslint-plugin-testing-library">eslint-plugin-testing-library</a> — will check the code of our tests for the Testing Library.</li><li><a href="https://www.npmjs.com/package/eslint-plugin-jsx-a11y">eslint-plugin-jsx-a11y</a> — will check if we have added accessibility rules to our JSX elements or not.</li></ul><h3>Typescript</h3><p>Since the entire codebase is written in TypeScript, we will add rules for linting TypeScript code — <a href="https://www.npmjs.com/package/@typescript-eslint/eslint-plugin">@typescript-eslint/eslint-plugin</a>.</p><h3>Promises</h3><p>We will add a plugin for linting code that works with Promises — <a href="https://www.npmjs.com/package/eslint-plugin-promise">eslint-plugin-promise</a>.</p><h3>Code quality</h3><p>And we will add two final plugins for linting code quality.</p><ul><li><a href="https://www.npmjs.com/package/eslint-plugin-sonarjs">eslint-plugin-sonarjs</a> — will help identify potential bugs and the use of suspicious patterns in the code.</li><li><a href="https://www.npmjs.com/package/eslint-plugin-unicorn">eslint-plugin-unicorn</a> — more than 100 useful rules for ESLint.</li></ul><p>We will install all the packages listed above and add them as peerDependencies in package.json.</p><pre>npm i -D eslint-import-resolver-typescript @typescript-eslint/eslint-plugin 
<br>eslint-config-prettier eslint-plugin-import eslint-plugin-jsx-a11y \<br>eslint-plugin-prettier eslint-plugin-promise eslint-plugin-react \<br>eslint-plugin-react-hooks eslint-plugin-simple-import-sort eslint-plugin-sonarjs \<br>eslint-plugin-testing-library eslint-plugin-unicorn</pre><p>Also, to ensure the user sees the full list of missing packages when installing our package, we will use the package.json directive <a href="https://docs.npmjs.com/cli/v9/configuring-npm/package-json#peerdependenciesmeta">peerDependenciesMeta</a> and mark each dependency as `optional: false`.</p><h3>Rules for ESLint</h3><p>In this section, I will describe the rules we enable for each group from the previous section, in the same order.</p><p>Let’s create a `rules` directory in `lib` — this is where the files for each part of the config will live.</p><pre>mkdir -p lib/rules</pre><h3>ESLint</h3><p>Let’s create a file to describe the ESLint configuration.</p><pre>touch lib/rules/common.js</pre><p>And add the rules there.</p><pre>/** eslint */<br>module.exports = {<br>    // https://eslint.org/docs/latest/rules/curly<br>    &quot;curly&quot;: [&quot;error&quot;, &quot;all&quot;],<br>    // https://eslint.org/docs/latest/rules/padding-line-between-statements<br>    &quot;padding-line-between-statements&quot;: [<br>        &quot;error&quot;,<br>        { &quot;blankLine&quot;: &quot;always&quot;, &quot;prev&quot;: [&quot;const&quot;, &quot;let&quot;, &quot;var&quot;], &quot;next&quot;: &quot;*&quot; },<br>        { &quot;blankLine&quot;: &quot;any&quot;, &quot;prev&quot;: [&quot;const&quot;, &quot;let&quot;, &quot;var&quot;], &quot;next&quot;: [&quot;const&quot;, &quot;let&quot;, &quot;var&quot;] },<br>        { &quot;blankLine&quot;: &quot;always&quot;, &quot;prev&quot;: &quot;*&quot;, &quot;next&quot;: &quot;return&quot; }<br>    ],<br>    // https://eslint.org/docs/latest/rules/no-multiple-empty-lines<br>    &quot;no-multiple-empty-lines&quot;: [&quot;error&quot;],<br>
  // https://eslint.org/docs/latest/rules/arrow-body-style<br>    &quot;arrow-body-style&quot;: [&quot;error&quot;, &quot;as-needed&quot;],<br>    // https://eslint.org/docs/latest/rules/prefer-arrow-callback<br>    &quot;prefer-arrow-callback&quot;: &quot;off&quot;,<br>    // https://eslint.org/docs/latest/rules/no-console<br>    &quot;no-console&quot;: [&quot;error&quot;, { &quot;allow&quot;: [&quot;warn&quot;, &quot;info&quot;, &quot;error&quot;] }],<br>    // https://eslint.org/docs/latest/rules/no-underscore-dangle<br>    &quot;no-underscore-dangle&quot;: [<br>        &quot;error&quot;,<br>        {<br>            &quot;allow&quot;: [&quot;_id&quot;, &quot;__typename&quot;, &quot;__schema&quot;, &quot;__dirname&quot;, &quot;_global&quot;],<br>            &quot;allowAfterThis&quot;: true<br>        }<br>    ],<br>}</pre><h3>Prettier</h3><p>Let’s create a file to describe the Prettier configuration.</p><pre>touch lib/rules/prettier.js</pre><p>Add the rules.</p><pre>/** eslint-plugin-prettier */<br>module.exports = {<br>    &quot;prettier/prettier&quot;: &quot;error&quot;,<br>}</pre><h3>Imports</h3><p>Let’s create a file to describe the configuration for “eslint-plugin-import” and “eslint-plugin-simple-import-sort”.</p><pre>touch lib/rules/import.js</pre><pre>touch lib/rules/simple-import-sort.js</pre><p>Add the rules.</p><pre>/** eslint-plugin-import */<br>module.exports = {<br>    // https://github.com/import-js/eslint-plugin-import/blob/main/docs/rules/first.md<br>    &quot;import/first&quot;: &quot;error&quot;,<br>    // https://github.com/import-js/eslint-plugin-import/blob/main/docs/rules/newline-after-import.md<br>    &quot;import/newline-after-import&quot;: &quot;error&quot;,<br>    // https://github.com/import-js/eslint-plugin-import/blob/main/docs/rules/no-duplicates.md<br>    &quot;import/no-duplicates&quot;: &quot;error&quot;,<br>    // https://github.com/import-js/eslint-plugin-import/blob/main/docs/rules/prefer-default-export.md<br>    
&quot;import/prefer-default-export&quot;: &quot;off&quot;,<br>    // https://github.com/import-js/eslint-plugin-import/blob/main/docs/rules/no-anonymous-default-export.md<br>    &quot;import/no-anonymous-default-export&quot;: [<br>        &quot;error&quot;,<br>        {<br>            &quot;allowArray&quot;: false,<br>            &quot;allowArrowFunction&quot;: false,<br>            &quot;allowAnonymousClass&quot;: false,<br>            &quot;allowAnonymousFunction&quot;: false,<br>            &quot;allowCallExpression&quot;: true,<br>            &quot;allowLiteral&quot;: false,<br>            &quot;allowObject&quot;: true<br>        }<br>    ],<br>    // https://github.com/import-js/eslint-plugin-import/blob/main/docs/rules/no-unassigned-import.md<br>    &quot;import/no-unassigned-import&quot;: &quot;off&quot;,<br>    // https://github.com/import-js/eslint-plugin-import/blob/main/docs/rules/no-unused-modules.md<br>    &quot;import/no-unused-modules&quot;: &quot;error&quot;<br>}</pre><h3>React</h3><p>Let’s create a configuration file for React.</p><pre>touch lib/rules/react.js</pre><p>Add the rules.</p><pre>/** eslint-plugin-react-* */<br>module.exports = {<br>    // https://github.com/jsx-eslint/eslint-plugin-react/blob/master/docs/rules/prop-types.md<br>    &quot;react/prop-types&quot;: &quot;off&quot;,<br>    // https://github.com/facebook/react/blob/main/packages/eslint-plugin-react-hooks/README.md<br>    &quot;react-hooks/exhaustive-deps&quot;: [2],<br>}</pre><h3>TypeScript</h3><p>Let’s create a configuration file for TypeScript.</p><pre>touch lib/rules/typescript.js</pre><p>Add the rules.</p><pre>/** @typescript-eslint-* */<br>module.exports = {<br>    // https://typescript-eslint.io/rules/no-use-before-define/<br>    &quot;@typescript-eslint/no-use-before-define&quot;: [&quot;error&quot;],<br>    // https://typescript-eslint.io/rules/no-unused-vars/<br>    &quot;@typescript-eslint/no-unused-vars&quot;: [<br>        &quot;error&quot;<br>    ],<br>    // 
https://typescript-eslint.io/rules/no-explicit-any/<br>    &quot;@typescript-eslint/no-explicit-any&quot;: &quot;error&quot;,<br>    // https://typescript-eslint.io/rules/naming-convention/<br>    &quot;@typescript-eslint/naming-convention&quot;: [<br>        &quot;error&quot;,<br>        {<br>            &quot;selector&quot;: &quot;interface&quot;,<br>            &quot;format&quot;: [&quot;PascalCase&quot;],<br>            &quot;custom&quot;: {<br>                &quot;regex&quot;: &quot;[A-Za-z]Interface$&quot;,<br>                &quot;match&quot;: true<br>            }<br>        },<br>        {<br>            &quot;selector&quot;: &quot;typeAlias&quot;,<br>            &quot;format&quot;: [&quot;PascalCase&quot;],<br>            &quot;custom&quot;: {<br>                &quot;regex&quot;: &quot;[A-Za-z]Type$&quot;,<br>                &quot;match&quot;: true<br>            }<br>        }<br>    ],<br>    // https://typescript-eslint.io/rules/ban-types/<br>    &quot;@typescript-eslint/ban-types&quot;: [<br>        &quot;error&quot;,<br>        {<br>            &quot;types&quot;: {<br>                // un-ban a type that&#39;s banned by default<br>                &quot;{}&quot;: false<br>            },<br>            &quot;extendDefaults&quot;: true<br>        }<br>    ]<br>}</pre><h3>Promises</h3><p>Let’s create a configuration file for Promises.</p><pre>touch lib/rules/promise.js</pre><p>Add the rules.</p><pre>/** eslint-plugin-promise */<br>module.exports = {<br>    // https://github.com/eslint-community/eslint-plugin-promise/blob/main/docs/rules/prefer-await-to-then.md<br>    &quot;promise/prefer-await-to-then&quot;: &quot;off&quot;,<br>    // https://github.com/eslint-community/eslint-plugin-promise/blob/main/docs/rules/always-return.md<br>    &quot;promise/always-return&quot;: &quot;off&quot;,<br>    // https://github.com/eslint-community/eslint-plugin-promise/blob/main/docs/rules/catch-or-return.md<br>    &quot;promise/catch-or-return&quot;: [2, { 
&quot;allowThen&quot;: true, &quot;allowFinally&quot;: true }],<br>}</pre><h3>Code Quality</h3><p>Let’s create configuration files for the code-quality plugins.</p><pre>touch lib/rules/sonarjs.js</pre><pre>touch lib/rules/unicorn.js</pre><p>Add the rules.</p><pre>/** eslint-plugin-sonarjs */<br>module.exports = {<br>    // https://github.com/SonarSource/eslint-plugin-sonarjs/blob/master/docs/rules/no-identical-functions.md<br>    &quot;sonarjs/no-identical-functions&quot;: [&quot;error&quot;, 5],<br>}</pre><pre>/** eslint-plugin-unicorn */<br>module.exports = {<br>    // https://github.com/sindresorhus/eslint-plugin-unicorn/blob/main/docs/rules/no-array-reduce.md<br>    &quot;unicorn/no-array-reduce&quot;: &quot;off&quot;,<br>    // https://github.com/sindresorhus/eslint-plugin-unicorn/blob/main/docs/rules/prefer-module.md<br>    &quot;unicorn/prefer-module&quot;: &quot;off&quot;,<br>    // https://github.com/sindresorhus/eslint-plugin-unicorn/blob/main/docs/rules/no-null.md<br>    &quot;unicorn/no-null&quot;: &quot;off&quot;,<br>    // https://github.com/sindresorhus/eslint-plugin-unicorn/blob/main/docs/rules/no-useless-undefined.md<br>    &quot;unicorn/no-useless-undefined&quot;: &quot;off&quot;,<br>    // https://github.com/sindresorhus/eslint-plugin-unicorn/blob/main/docs/rules/filename-case.md<br>    &quot;unicorn/filename-case&quot;: [<br>        &quot;error&quot;,<br>        {<br>            &quot;cases&quot;: {<br>                &quot;pascalCase&quot;: true,<br>                &quot;camelCase&quot;: true<br>            },<br>            &quot;ignore&quot;: [<br>                &quot;next-env.d.ts&quot;,<br>                &quot;vite(st)?.config.ts&quot;,<br>                &quot;vite-environment.d.ts&quot;,<br>                &quot;\\.spec.ts(x)?&quot;,<br>                &quot;\\.types.ts(x)?&quot;,<br>                &quot;\\.stories.ts(x)?&quot;,<br>                &quot;\\.styled.ts(x)?&quot;,<br>                &quot;\\.styles.ts(x)?&quot;,<br>            ]<br>
}<br>    ],<br>    // https://github.com/sindresorhus/eslint-plugin-unicorn/blob/main/docs/rules/prevent-abbreviations.md<br>    &quot;unicorn/prevent-abbreviations&quot;: [<br>        &quot;error&quot;,<br>        {<br>            &quot;checkFilenames&quot;: false<br>        }<br>    ],<br>}</pre><h3>Publishing an NPM package</h3><p>To publish the package, we will use the following utilities:</p><ul><li><a href="https://github.com/release-it/release-it">release-it</a> + <a href="https://github.com/release-it/bumper">@release-it/bumper</a> + <a href="https://github.com/release-it/conventional-changelog">@release-it/conventional-changelog</a></li><li><a href="https://www.npmjs.com/package/@commitlint/cli">@commitlint/cli</a> + <a href="https://www.npmjs.com/package/@commitlint/config-conventional">@commitlint/config-conventional</a></li><li><a href="https://www.npmjs.com/package/commitizen">commitizen</a> + <a href="https://www.npmjs.com/package/cz-git">cz-git</a> + <a href="https://www.npmjs.com/package/cz-conventional-changelog">cz-conventional-changelog</a></li></ul><p>These utilities let us describe commits in a conventional format, so that SemVer versioning of our ESLint plugin happens automatically and a <a href="https://github.com/dipiash/eslint-plugin-nimbus-clean/blob/main/CHANGELOG.md">CHANGELOG</a> is generated. 
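As a side note: the per-area rule files created above are ultimately merged in the plugin's entry point, roughly like this (an abbreviated sketch with truncated rule maps, not the actual eslint-plugin-nimbus-clean source):

```javascript
// lib/index.js (sketch): assemble the per-area rule maps into a single
// "recommended" config, the shape ESLint expects from a plugin config.
// The rule maps below are abbreviated examples; the real files export
// the full sets shown earlier in the article.
const commonRules = { curly: ["error", "all"] };
const prettierRules = { "prettier/prettier": "error" };
const importRules = { "import/first": "error" };

const plugin = {
  configs: {
    recommended: {
      extends: ["eslint:recommended", "prettier"],
      plugins: ["prettier", "import", "simple-import-sort"],
      // merge all rule maps into one flat rules object
      rules: { ...commonRules, ...prettierRules, ...importRules },
    },
  },
};

module.exports = plugin;
```

With an entry point of this shape, consumers can extend "plugin:your-plugin-name/recommended" as described earlier.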
You can see the entire setup in the <a href="https://github.com/dipiash/eslint-plugin-nimbus-clean">repository</a>.</p><h3>Integration into a Project</h3><p>After publishing to NPM, you can install and integrate the <a href="https://www.npmjs.com/package/eslint-plugin-nimbus-clean">package</a> into any of your projects.</p><pre>npm i eslint-plugin-nimbus-clean</pre><p>Then set up the ESLint configuration:</p><pre>{<br>    &quot;extends&quot;: [<br>      &quot;plugin:nimbus-clean/recommended&quot;<br>    ]<br>}</pre><p>Detailed instructions can be found in the project’s <a href="https://github.com/dipiash/eslint-plugin-nimbus-clean/blob/main/README.md">README</a>.</p><h3>Conclusion</h3><p>That is my experience of creating custom configurations and plugins for ESLint and publishing them to NPM.</p><p>With this approach, you create the desired configuration for your projects once and then reuse it. If the ESLint config needs to change, you change it in one place, and each project simply updates the package version when necessary.</p><p>What else would you recommend adding to this plugin?</p><p>You can see all the code in the <a href="https://github.com/dipiash/eslint-plugin-nimbus-clean">GitHub repository</a>, and the package can be found on <a href="https://www.npmjs.com/package/eslint-plugin-nimbus-clean">NPM</a>. 
I will be glad if you star the repository :) Feel free to ask me any questions in the comments below.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e32aa71b063b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/quadcode-life/improving-development-productivity-the-magic-of-a-unified-eslint-configuration-e32aa71b063b">Improving development productivity: the magic of a unified ESLint configuration</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Features of Working with Postgresql]]></title>
            <link>https://medium.com/quadcode-life/features-of-working-with-postgresql-62c5a4224627?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/62c5a4224627</guid>
            <category><![CDATA[database]]></category>
            <category><![CDATA[postgres]]></category>
            <category><![CDATA[postgresql]]></category>
            <dc:creator><![CDATA[Mikhail]]></dc:creator>
            <pubDate>Fri, 25 Aug 2023 14:22:51 GMT</pubDate>
            <atom:updated>2023-08-25T14:22:51.841Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UxU5g1Xc4JNlBNes" /></figure><p>We love Postgres! We know it well and know how to manage it. At the moment, the number of postgres clusters in our production environment is approaching 200, and we have accumulated some experience in working with it that I would like to share.</p><p>I will talk about the problems our developers encountered while working with postgresql, related to the specifics of this database (including the specifics of our installations). Just because something may lead to problems does not mean it should never be done, but it is important to understand how the database works, how it can behave, and how to prevent the issues that may arise.</p><p>Since the target audience of this article is developers, I assume that you will not need to set up the database yourself, so I do not provide specific instructions on the parameters discussed.</p><p>A few words about our installations: there is nothing unique in our configuration, but it still needs to be taken into account.</p><p>We use a connection pooler — pgbouncer, which is installed on each host with the DB, and applications can only work through it. This pooler has an important setting that determines when a connection to the server can be reused. The default mode, session, means that the server connection is held by the client until the client disconnects; we set it to transaction, so that as soon as the client completes a transaction, the server connection is returned to the pool and can be reused. 
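In pgbouncer.ini this is controlled by the pool_mode setting; a minimal sketch (host, database name, and limits here are illustrative, not our production values):

```ini
[databases]
; clients connect to pgbouncer (port 6432) instead of postgres directly
app_db = host=127.0.0.1 port=5432 dbname=app_db

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; the default is "session"; "transaction" returns the server connection
; to the pool as soon as the client's transaction completes
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```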
This greatly increases the benefit of using pgbouncer because we can: 1) limit the number of connections to postgresql (each connection is relatively expensive for the database); 2) gain greater flexibility in database maintenance (through pgbouncer we can pause queries to the DB or transparently redirect them to another DB).</p><p>Developers can create dev environments on their own, so during development an application sometimes connects directly to the database, and the specifics of working with pgbouncer may go unnoticed. Let’s discuss some of these challenges.</p><h3>Prepared Statements</h3><p>Even if a developer does not explicitly use prepared statements, their framework might do so. However, prepared statements do not work through pgbouncer in transaction mode: the statement is prepared on one server connection, while the subsequent execution may land on a different one, in which case the database returns an error. There are workarounds for almost any library. For example, for the popular golang library lib/pq, you need to add the parameter “binary_parameters=yes” to the database connection string.</p><h3>Session Parameter Changes</h3><p>Other problems with pgbouncer are also related to the reuse of connections. For example, if you have an application component that changes the search_path for its operation, it will affect its neighbors who expect the default search_path: when they call functions or tables without specifying a schema, they will get an error. A more trivial case is when someone connects to the database using the application’s credentials through a bouncer and sets the session to “read-only”. As a result, all queries going through such a connection will lose write permission.</p><p>There are also some PostgreSQL features that simply cannot work through pgbouncer. For example, we ran into the impossibility of using advisory locks and listen/notify. 
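To illustrate the lib/pq workaround mentioned above, the parameter is added straight to the connection string (host and credentials here are placeholders):

```text
postgres://app_user:secret@pgbouncer-host:6432/app_db?sslmode=disable&binary_parameters=yes
```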
In exceptional cases, we compromise — we set up two pgbouncers, one in session mode, the other in transaction mode. In this case, the application implements logic that selects the appropriate connection based on its needs.</p><h3>Row Updates and Deletions</h3><p>One of the things you need to know when working with PostgreSQL is how updates and deletions of rows work in this database. When an update occurs, a new row is created. In both cases (an update and a deletion), the old row technically remains in the database and is only marked as deleted until a vacuum comes and frees up space.</p><p>However, there is an important nuance — the vacuum <strong>will only deal with rows</strong> that no one will definitely access anymore, meaning: if there is a transaction in the database that started before the row was deleted, the space will not be reclaimed. Moreover, freeing up space in most cases means that the database will be able to use the freed space for new data in the same table, rather than returning it to the operating system.</p><p>From this description, a fairly obvious problem arises: tables that are frequently updated (as well as indexes in such tables) can “bloat” because holes form in them. This is generally not a developer’s problem, as an experienced DBA will fine-tune the vacuum settings on the database side, but this can only be mitigated to a certain extent.</p><h3>Potential Problems and How to Fix Them</h3><p>So, my favorite use case related to this feature is creating a queue based on PostgreSQL. In this instance, a table is created with the tentative name “event,” into which one daemon adds events, and another daemon retrieves and then deletes them. Some logic with stored procedures or some rationale for deferred task execution may be added here, but the essence remains the same — a queue is a queue, and on any heavily loaded database, it will inevitably cause issues. 
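The anti-pattern looks innocent enough in SQL; a hypothetical sketch (not code from our systems):

```sql
-- the "queue" table one daemon inserts into and another drains
CREATE TABLE event (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- consuming a batch: every deleted row becomes a dead tuple that only
-- vacuum can reclaim, so under load the table and its index bloat
DELETE FROM event
WHERE id IN (SELECT id FROM event ORDER BY id LIMIT 100)
RETURNING payload;
```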
At one point, the table will grow larger than usual, start lagging on reads, and as a result, will continue to grow. Moreover, such tables do not tolerate long transactions in the database (as a reminder, long transactions interfere with the operation of the vacuum). How to deal with this?</p><p>1) Don’t use a table as a queue. If you really need one, look into pgq — an extension that implements a queue on top of PostgreSQL. We use it quite successfully in our company, and it causes us far fewer problems. However, I should note that it also tolerates very long transactions (in our experience, several hours) poorly.</p><p>2) Don’t allow long transactions on the master. If you have such transactions in your application, move them to a replica and optimize them. Ask your DB administrator to set up the termination of long transactions. This is a less painful situation that doesn’t lead to fatal degradation but still worsens the life of the database. Imagine that you need to update a large number of rows in the database once, for example, you added a new field and want to fill it for all rows. After the update, you may be surprised to find that your table now occupies twice as much space. Such migrations should be carried out in batches because: a) I have just warned you about long transactions; b) pausing between batches gives the vacuum a chance to reclaim space as you go (you can even trigger it manually).</p><p>An even more common scenario — there is a table with historical data in the database, and developers sometimes delete old data from this table. But, as you’ve already guessed, the space is not freed. Such tables are best partitioned by date. Then you can simply drop old partitions when you realize that you no longer need them.</p><p>In reality, the vacuum is not only needed to free up space from deleted rows. 
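Returning to the batching advice above, a one-off backfill can be sketched like this (table and column names are invented for illustration):

```sql
-- fill a newly added column in chunks instead of one huge UPDATE;
-- repeat the statement (with pauses in between) until it touches 0 rows
UPDATE accounts
SET    new_field = 'default'
WHERE  id IN (
    SELECT id FROM accounts WHERE new_field IS NULL LIMIT 10000
);
```

Between batches you pause, and optionally run `VACUUM accounts;` yourself, so dead tuples are reclaimed as you go.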
To ensure data versioning in PostgreSQL, each row also stores the ID of the transaction that created it and of the one that deleted it. Consequently, the row will be visible to transactions that fall between these values. This ID is 4 bytes long, so its range is limited and the counter is cyclical. For proper functioning, rows that are currently visible to all transactions (i.e., unchanged) are frozen by the vacuum. However, if rows aren’t frozen and the difference between the creation ID and the current transaction approaches half of the cycle (about 2 billion transactions), the database stops accepting writes. 2 billion isn’t a small number, but it’s attainable.</p><p>In our experience, this happened on a test database. One Friday, a new version of the application was deployed to the test environment, which generated a huge load on the databases — 30k transactions per second. The auto-vacuum, unfortunately, couldn’t cope because the load on the database had already increased, and the hardware was weaker than in production. So in less than 20 hours, the database switched to read-only mode. In our case, this load occurred due to an application error. The main advice here is — don’t deploy on Fridays. The wraparound counter can also be monitored for each table, but in regular situations, we haven’t encountered this issue again.</p><h3>The Change of a Schema and Vacuum</h3><p>Vacuum does not take a write lock on the table, so it can work smoothly in the background. However, in some cases, the vacuum can block changes to the table schema. If, at that moment, you want to, for example, add a new column to the table or even create a partition, the schema modification procedure will wait for the vacuum to finish. Schema modification, in turn, locks writes to the table, so your schema modification operation, while waiting for the vacuum to finish, will not allow writing operations on that table. 
This can be easily prevented:</p><ul><li>You can create a stored procedure that kills vacuum processes and call it before starting migrations.</li><li>Before making schema changes, you can set a timeout for query execution (for example, via lock_timeout), after which the query is canceled. This is good practice in general, because migration procedures can take a long time for various reasons, which can come as a surprise.</li></ul><p>Developers often use replicas for read operations to reduce the load on the master. There can be unexpected nuances here too.</p><p>If you run long queries on a replica, the server will pause replication and not apply new changes during this time. To resolve this, there’s a setting in postgres (max_standby_streaming_delay) that cancels queries that hold up replay longer than a specified duration. By default, it’s set to 30 seconds, which sometimes becomes a problem for developers who plan to delegate heavy and lengthy queries to replicas. You can increase this delay, but it means your replica will fall further behind. You can also pass information about the currently executing queries to the master (hot_standby_feedback); this will prevent canceling replica queries, but it means that such queries will affect the master’s vacuum and may lead to database bloat, as we already mentioned. In any case, it’s a trade-off, and you need to be aware of this feature and choose the most appropriate solution in each case.</p><p>Once, we encountered an interesting problem — our monitoring started to fail on one of our databases. It turned out that the script collecting information from the DB began timing out, and the timeout happened when gathering data from pg_stat_statements. Pg_stat_statements is a useful extension that allows us to analyze database queries. Semantically identical queries in this table are aggregated, allowing us to obtain statistics on queries even if they were invoked with different parameters. In our case, the application had massive insert queries, inserting up to 50,000 rows, each with 10 values. 
The resulting query size was more than 12 megabytes. The situation was further complicated because there weren’t exactly 50,000 rows, but slightly fewer (some were filtered by the application), and each time the number varied. As a result, pg_stat_statements couldn’t recognize these queries as similar, and a separate record was created for each of these queries.</p><p>When querying for statistics, the intermediate query result started weighing over a gigabyte, no longer fit into memory, and was written to disk. Queries utilizing temporary files take significantly longer, so we couldn’t fit within the allotted time for statistics collection. Initially, we tried to reset the pg_stat_statements statistics once a day, but that no longer worked. So, we consulted with our team to reduce the size of such queries (unfortunately, it would require significant refactoring of the application logic to make the queries identical). On the database side, we reduced the number of records stored in the statistics.</p><h3>Conclusions</h3><p>If you work with databases, it is essential to understand what happens behind the SQL queries you execute for your work to be maximally efficient. I hope this article helped shed light on some peculiarities. Nevertheless, I advise you, when adopting new patterns in database work, to study the possible impact on the database. I will be glad to answer all your questions or just discuss the article in the comments. Thank you!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=62c5a4224627" width="1" height="1" alt=""><hr><p><a href="https://medium.com/quadcode-life/features-of-working-with-postgresql-62c5a4224627">Features of Working with Postgresql</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How design systems are created: sharing our own example. Part 2]]></title>
            <link>https://medium.com/quadcode-life/how-design-systems-are-created-sharing-our-own-example-part-2-df42f786ec1a?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/df42f786ec1a</guid>
            <category><![CDATA[product]]></category>
            <category><![CDATA[ui]]></category>
            <category><![CDATA[design-systems]]></category>
            <category><![CDATA[react]]></category>
            <category><![CDATA[typescript]]></category>
            <dc:creator><![CDATA[Dmitrii Pashkevich]]></dc:creator>
            <pubDate>Fri, 28 Jul 2023 16:45:41 GMT</pubDate>
            <atom:updated>2023-07-28T16:48:10.974Z</atom:updated>
            <content:encoded><![CDATA[<p>Hey, everyone! This is Dmitry Pashkevich, and I am a frontend developer at Quadcode, specializing in the creation and development of design systems. This is the second part of a comprehensive article, “How Design Systems Are Created.” Last time,<a href="https://medium.com/quadcode-life/how-design-systems-are-created-sharing-our-own-example-part-1-82f58c6f3719"> <strong>we discussed the basics of design systems</strong></a><strong> </strong>— problems that prompt a company to create design systems, phases of forming design systems, and general design system development. Today, I’m going to delve into our own experience, explain how to define product (and design system) requirements and much more. Let’s dive in!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*_fADNxwcrDq5rz4A" /></figure><h3>Defining Product Requirements and, as a result, Design System Requirements</h3><p>The process of defining product requirements began with the frontend (FE) team choosing suitable tools to implement the entire project, not just the design system, based on minimal requirements:</p><ul><li>React:</li></ul><p>a) Server-Side Rendering (SSR) for the main page,</p><p>b) Client-Side Rendering (CSR) for the user dashboard.</p><ul><li>TypeScript.</li><li>Work with REST/GraphQL/WebSockets.</li><li>SSR support.</li><li>React components should be covered by tests.</li><li>Usage of modern transpilation tooling — Babel/Esbuild, etc.</li><li>Enforced linting and code style (code analyzer).</li><li>Defined project folder structure and agreement on code organization.</li><li>Optional: use a UI kit and reuse it for components.</li><li>Optional: ability to view the project’s UI kit and provide it to designers.</li></ul><p>As a result of this selection, the following technologies and tools were chosen:</p><ol><li>React + TypeScript</li><li><a href="https://gist.github.com/dipiash/a5e78b20af18849fdff888c5820c44f7">Eslint</a> with a set of 
preferred rules</li><li>GraphQL — Apollo</li><li>TanStack Query</li><li>Schema-first approach for API development, generating TypeScript types from Swagger schemas</li><li>Styled-components</li><li>Vite as the project bundler</li><li>Storybook for component display</li><li>npm as the package manager (Yarn/pnpm/npm were considered)</li><li>Monorepo using NX</li><li>Feature-Sliced Design for code architecture and structure</li><li>Testing: unit tests, Jest (later transitioned to Vitest), RTL/snapshot testing</li><li>Monitoring with Sentry</li></ol><p>At this stage, we also considered the UI kit and how we would build it:</p><ul><li>Possible use of react-hook-form for working with forms</li><li>Using styled-system.com for building the UI, building from scratch, or exploring alternative options</li></ul><p>The question of building the UI kit sparked heated discussions and extensive research. I had no desire to start from scratch and go through the process of creating a new set of components again (something I had been through before, and it took a long time), while also encountering the same pitfalls. Additionally, it was clear at the time that we would not have much time to spend on building the UI kit.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*swv4NSj1AGLiRngV" /></figure><p>After some more time spent on research and intense discussions, a solution emerged. We chose Mantine, although we considered other options such as Ant Design, Chakra UI, and Semantic UI. Mantine won our hearts with its project documentation and more. In the next section, I will provide more details on why we were so impressed with this tool. 
This choice became the starting point for working on the design system, and we already had a sense that it would be an entirely new experience.</p><h3>Results of Working with Requirements or Why Mantine?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UP_v5_I-WMDUTtBh" /></figure><p>The result of the research was a rough draft of the application skeleton. You can check out an example on GitHub. The final solution differs from what is presented in the repository, but the concept is reflected. After creating the skeleton, we started working on the design system.</p><p>The development of the design system began by identifying the following basic elements:</p><ul><li>Color palettes</li><li>Typography</li><li>Margins/Padding</li><li>Icons/Images</li><li>Borders</li><li>Radii (rounded corners)</li><li>Patterns/UX/UI flows</li><li>Animations</li></ul><p>Based on this, we initially set up a configuration for theming (and now we are gradually building the design tokens system). As mentioned earlier, our design system is built using Mantine. So, what made it so useful? On the one hand, it might seem limiting because we are relying on a specific tool. But on the other hand:</p><h4>Mantine is a clear and flexible tool</h4><p>It provides:</p><ul><li>Documentation;</li><li>Theming capabilities;</li><li>Typing;</li><li>Source code that can be used as supplementary/training/inspirational material;</li><li>A broad set of basic, easily customizable components, most of which have been tested in projects and are supported by the community, indicating that they cover many typical use cases and have resolved enough bugs to be used in production;</li><li>Components are broken down into layers, with almost every layer being customizable and stylizable. 
Again, if you need to do something specific, you can always take the source code of the basic elements and customize it or create a pull request;</li></ul><p>In essence, for us, Mantine serves as the foundation for building our “home”: the design system.</p><h4>Speed of development</h4><p>Comparing the efforts required to create components from scratch versus using Mantine, it looks like this:</p><p><strong>From scratch (under limited time constraints):</strong></p><ol><li>You come up with the logic of how the component will work in different states (default/active/focus/disabled, etc.) and then you program the logic/styles.</li><li>You think about how it can be styled externally (usually, people don’t think about this at the start), and then slight differences appear here and there, because no one breaks the component down into layers (each layer, and each element within a layer, should be styleable).</li><li>There are also different variations of the component, and you have to implement logic and styles for them too. Due to time constraints, the resulting API might end up not being very convenient to use or modify.</li></ol><p><strong>If we use Mantine:</strong></p><p>You have a ready-made basic component, organized into layers with predefined states (default/active/focus/disabled, etc.) and behavior logic. Essentially, this gives you both constraints and safety in component creation.</p><p>You simply style the component according to your theme. Styling variations involve describing different styles for display based on the known classes for the layers.</p><p>If there is a need for some custom logic, it can be easily implemented through the standard component API or by modifying the source code, but within your own project.</p><p>I believe it is essential to note that at the product launch, it is sometimes necessary to simplify the interface and make reasonable assumptions because it is not always possible to achieve everything we want right away. 
Striking a balance is crucial.</p><blockquote><strong>According to some estimates, creating the final component based on Mantine is approximately 5–10 times faster than building it from scratch</strong>.</blockquote><p>This is achieved because:</p><ul><li>The component’s parameters and layers are immediately clear.</li><li>You can interactively view the component in the documentation, which gives you an understanding of its capabilities and limitations, along with easily readable source code.</li><li>You can start using the component immediately without having to come up with its logic from scratch, by using the offered API, which covers around 80–90% of the logic needs for displaying the component, and in most cases, even 100%.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*vQSEPGITpVW-9Qu3" /></figure><h4>Layered API</h4><p>Let’s consider layers using the example of a simple button. On the surface, it may not seem complicated, but here’s how it looks in <a href="https://mantine.dev/core/button/?t=styles-api">layers</a>:</p><ul><li>Root</li><li>Inner</li><li>Icon + Left Icon</li><li>Label</li><li>Loader</li><li>Icon + Right Icon</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*JJgfFF7dAp5jwcvX" /></figure><p>Or, for a more complex component like Select, it can be broken down into layers with the ability to modify the rendering of the dropdown list, etc.</p><p>You can create your custom components according to the following rules:</p><ul><li><a href="https://mantine.dev/guides/custom-components/#add-styles-api-support">Styles API</a></li><li><a href="https://mantine.dev/guides/custom-components/#add-default-props-support">Default props</a> &amp; <a href="https://mantine.dev/guides/custom-components/#defaultprops-type">types</a>.</li></ul><p>Examples of the code can be viewed <a href="https://gist.github.com/dipiash/a2798b48c621439e023e6f9986aca1d4">here</a>.</p><h4>The impact of Mantine version updates</h4><p>The skeleton of 
our project and part of the setup were created using Mantine version 4. We went through the update to version 5 relatively smoothly, and soon we will have to update to version 6. It might be a bit more challenging, but the team has provided a detailed <a href="https://mantine.dev/changelog/6-0-0/">CHANGELOG</a> to help with the process.</p><h4>Benefits</h4><blockquote><strong>Mantine is about achieving more results with less effort.</strong></blockquote><p>When starting to work on a design system, a lot of time is always invested in basic components, no matter how much we want to avoid it. However, using auxiliary tools significantly eases the work for both developers and designers.</p><p>Our very intense development and design phase lasted about one quarter. During this time, 96% of the components were ready, the framework of the application was set up with various providers (authorization, translations, etc.), and several sections related to authentication and onboarding were implemented.</p><p>The phase of active synchronization took another quarter. A few new components were introduced (no more than 10), and most of the time was spent on assembling pages.</p><p>Currently, we are in the third quarter of development, and I believe we have reached a plateau — there are no major changes happening. However, soon we will begin moving towards design tokens and scaling the design system to other team projects. I think we will go through all three phases of design system development to some extent again.</p><h3>Summarizing the Definition of a Design System</h3><p>Let me provide a brief summary and definition of what a design system is.</p><p>I’ll start by saying that in my understanding, the ideal start of working on a design system is when passionate individuals from both the front-end development and design sides come together. These individuals are enthusiastic and willing to put in the effort, even staying up late at night to discuss and improve their creation. 
All that remains is to find a little time to get started.</p><p><strong>Developing a design system is about:</strong></p><ul><li>Having a single source of truth for UI/UX and interface text.</li><li>Flexibility, standardization, and simplicity.</li><li>Thoughtfulness, efficiency, and transparency.</li><li>Change safety, consistency, and inheritability.</li><li>Automation (not necessarily at the beginning).</li><li>Accessibility.</li><li>Constant communication.</li></ul><p><strong><em>A design system is a set of rules for styling products that helps maintain the integrity of the user experience and optimize resources expended on development and design. </em></strong>It involves a high degree of autonomy through a unified design, component library, guidelines, and established development approaches. This, in turn, allows businesses to simply articulate the problem and what they want to change, while the design system with all its processes and approaches begins to help solve the identified issues.</p><p>Ultimately, a design system encompasses:</p><ul><li>Communication rules.</li><li>Visual language.</li><li>Documentation.</li><li>Component library.</li></ul><p>I hope this makes it clear that a UI kit is just a component library, and without all the other elements, it cannot be called a design system.</p><h3>What a Design System has brought and who it benefits</h3><p>A design system is not a simple and cheap tool that can be implemented and used casually. A team or company should reach a certain point where the use of this approach is justified.</p><p>Let’s take a look at what we ultimately achieved after 3 quarters of working on the design system and product.</p><ul><li><strong>Each team member found value in the design system, starting with costs and feature release speed: </strong>by reusing well-designed components, more resources are available for creating new/non-standard features in the product. 
After laying the foundation, we became faster in designing and assembling application sections.</li><li><strong>Reduction in design time:</strong> Using the design system allows designers to reduce the time spent on design and development. This is possible because a significant portion of design elements and components are already ready and can be reused.</li><li><strong>Unified design and components for development: </strong>We hardly write styles within the project, thanks to the unified design and components. The proportion of styling files is less than 4%, which significantly speeds up development and the release of new pages (time to market).</li><li><strong>User interface development is like building with blocks:</strong> We spend more time on the application’s logic rather than its layout.</li><li><strong>Single source of truth: </strong>Since the design system is the source of truth, there are no discrepancies in interface-related decisions. Any team member can review the design and implementation to ensure compliance with the layout and rules. This helps maintain the quality of released solutions.</li><li><strong>Convenient testing setup:</strong> We have design reviews followed by task/feature testing. If there are changes, they are implemented both in Figma and Storybook. For example, our QA team often uses Figma and consults Storybook to review component properties and capabilities. When working with content and editors, we know that most of the product’s textual content is reflected in the design. The text is pulled from a specialized service.</li><li><strong>Product quality:</strong> Using the design system allows us to create higher quality products. 
Strict styling rules, a unified design, and the reuse of elements ensure that the product meets quality standards.</li><li><strong>Developers propose solutions before design:</strong> Developers can propose solutions for certain tasks based on established approaches and coordinate them with designers (if the design is not ready yet).</li><li><strong>Onboarding new employees:</strong> During onboarding, new employees gain a clear understanding of the capabilities of the components used in the project. They can view interactive examples of their work and read the documentation.</li><li><strong>Unified solution across different projects: </strong>There is no need to adapt to a new interface workflow.</li><li><strong>Using a design system helps create a unified design</strong>, which enhances the user’s perception of the product. Strict styling rules and the use of a consistent style throughout the project ensure a uniform user interface and make the project more recognizable.</li><li><strong>Fast adaptation to changes</strong>: This is more relevant for design, as development becomes more laborious when the API of a component is not compatible with the previous version. The design system allows for quick adaptation to project changes. If changes are made to a component, they automatically apply wherever it is used, which accelerates the adaptation process.</li><li><strong>No bloating of the team’s technical stack:</strong> Focus remains on implementing our own UI kit. The product is executed in a unified style, despite consisting of multiple services. There is no need to learn a new framework/version of another library.</li></ul><p>The main benefits we gained as a team have been listed above. 
However, there are certain aspects worth considering, which are not necessarily drawbacks but rather peculiarities:</p><blockquote>Synchronizing component changes in Figma and code can be challenging.</blockquote><p>Starting a product is more challenging as components need to be developed first before the business begins receiving fully functional application pages incrementally. However, this is true for the first two or three iterations of implementing the UI kit in new projects, as it requires resolving related discrepancies during integration: settings, styling, documentation, versioning, etc. Overall, it is a solvable problem, and with each new project, the startup process accelerates significantly.</p><p>Maintaining components as a library is not always easy: versioning and applying new versions to all projects, testing them, etc., can consume a considerable amount of resources. This is especially true at the beginning when requirements can change frequently.</p><h3>Plans for Design System Development</h3><p>What’s next? 
We want to:</p><ul><li>Separate components into a separate package:<ul><li>Implement versioning in code.</li><li>Implement versioning in design.</li></ul></li><li>Start implementing the design system in other team projects:<ul><li>Address and resolve related issues.</li></ul></li><li>Develop new components and improve existing ones.</li><li>Upgrade to version 6 of Mantine.</li><li>Implement design tokens for key components.</li><li>Implement color scheme management.</li><li>Describe standards for component creation and their entire lifecycle.</li><li>Start describing contexts and rules for component usage.</li></ul><p>By the time this article is published, some of these tasks will already be completed.</p><h3>Code Examples</h3><ul><li><a href="https://gist.github.com/dipiash/a5e78b20af18849fdff888c5820c44f7">ESLint configuration</a></li><li><a href="https://gist.github.com/dipiash/a2798b48c621439e023e6f9986aca1d4">Styling of the Stats component</a></li><li><a href="https://github.com/dipiash/nx-ts-vite-react-graphql-styled-monorepo-example">Basic project skeleton</a> (the final solution differs from what is presented in the repository, but the concept is reflected).</li></ul><p>That’s it! I hope you’ve enjoyed this article. Let me know what you think about it in the comments. I’ll be glad to answer any of your questions. See you!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=df42f786ec1a" width="1" height="1" alt=""><hr><p><a href="https://medium.com/quadcode-life/how-design-systems-are-created-sharing-our-own-example-part-2-df42f786ec1a">How design systems are created: sharing our own example. Part 2</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How design systems are created: sharing our own example. Part 1.]]></title>
            <link>https://medium.com/quadcode-life/how-design-systems-are-created-sharing-our-own-example-part-1-82f58c6f3719?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/82f58c6f3719</guid>
            <category><![CDATA[design-systems]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[react]]></category>
            <category><![CDATA[front-end-development]]></category>
            <category><![CDATA[mantine]]></category>
            <dc:creator><![CDATA[Dmitrii Pashkevich]]></dc:creator>
            <pubDate>Thu, 27 Jul 2023 17:33:13 GMT</pubDate>
            <atom:updated>2023-08-01T09:19:10.993Z</atom:updated>
<content:encoded><![CDATA[<h2>How design systems are created: sharing our own example. Part 1</h2><p>Hello everyone. My name is Dmitry Pashkevich, and I am a frontend developer at Quadcode, specializing in the creation and development of design systems. This article is intended for specialists at various levels who are involved in design systems: from consumers and component developers to team/tech leads building design systems from scratch. Here, I will share my experience and the path I have taken from creating UI kits to fully building design systems, and show the benefits that a ready-made design system has brought us. If you are looking to deepen your understanding of design systems, then this article is for you. Happy reading!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*9H8Fwp3vvEigMDJ0" /></figure><h3>Intro</h3><p>To some extent, the term “design system” has become a collective one, because experience shows that different companies interpret it differently. Understanding depends on the maturity level of the team/project/company/processes, the presence of desire, time, and resources, and most importantly, passionate people who will develop this direction despite all the difficulties.</p><p>I have gone all the way from creating various UI kits and transforming them into design systems to fully building design systems from scratch, from the perspective of FE development, in the following roles:</p><ol><li>Consumer of components.</li><li>Developer of components.</li><li>Lead in building a design system from scratch. Here, 80–90% of the time was spent on component development, while the remaining 10–20% involved applying components in the product code, as well as sharing knowledge and interacting with the FE development team.</li><li>Product developer — when you “build components” and immediately implement them during product development. 
In this case, the entire team participates in working on the design system.</li></ol><p>In this article, I want to share the experience we have gained and try to structure it using our company as an example. I will demonstrate the phases of formation and development of the design system that we went through, share the technical stack of our product, and provide insights into the benefits the ready-made design system has brought us.</p><p>On one hand, I would like to immediately provide a definition of what a design system is in my understanding, but let’s approach it slightly differently and save the definition for after the main part.</p><p><em>&lt;</em><strong><em>SPOILER&gt;</em></strong><em> For those who are eager to learn the differences between a design system and a UI kit, you can skip ahead to the second part of my article and scroll straight to the section “What is a design system, in summary.” However, I still recommend reading from beginning to end. &lt;/</em><strong><em>SPOILER&gt;</em></strong></p><h3>The problems that lead to the emergence of design systems</h3><p>I will start with what usually initiates the birth of a design system. Most often, the reason lies in accumulated and recognized problems that the whole team or specific parts of it begin to address — the areas where the pain is the greatest.</p><p>So, what can these problems be?</p><ol><li>The old design is lost. It is no longer supported, or it exists only in images (yes, it happens). In this case, developers, on their own or with a designer’s help, come up with ways to incorporate specific elements. The consequence is that the development of new pages becomes a pain. The product inevitably becomes inconsistent across different pages because the logic of interface construction is lost.</li><li>The old design/interface has simply become outdated, and a redesign is necessary. The question arises: what should be done? 
Should the current product be modified: modularly, page by page, or should it be built from scratch, etc. — each person chooses their own path. Different paths have nuances that need to be dealt with.</li><li>Communications regarding feature creation/interface changes exceed reasonable limits. In this case, creating a new element or UX approach becomes disconnected from the product. I think this can be considered as an extension of point 2 and the team’s desire to provide users with a more convenient and modern product.</li><li>Creating any new feature (even a similar one) takes a lot of time. Various factors can contribute to this, such as poor architecture inherited from the MVP project, outdated tools, or the inability to simply update.</li><li>The necessity to improve processes. The desire to create a unified point of communication within the team, at least between development and design. A little spoiler: as practice shows, during development, this point, also known as the design system, becomes at least a single source of truth, and discussions constantly arise there.</li><li>The presence of significant routine work when creating new layouts or coding within a single product. Sometimes, there is no opportunity to reuse a UI kit in design or code. This situation often arises across different projects and occasionally within a single project (more relevant to design in a specific tool). It looks something like this: you open a new design layout, see similar elements but with different names. It becomes difficult for a developer to match elements with the code, and there is a need to standardize everything to avoid wasting time on unnecessary clarification when it’s simply a matter of taking, implementing, and using.</li><li>There is a product pool within the company, and at some point, there is a need to standardize the UI/UX. Each product has its own team of designers and developers. Does each team need to develop its own UI kit? It happens sometimes. 
Then the implementation of a design system can start from the design side, which brings everything to a unified appearance. As a result, multiple UI kits may appear in the code due to resistance in different teams, but after a couple of iterations, a unified component base begins to emerge.</li></ol><p>The problems described are not listed in any particular order of importance; any of them can be a starting point for creating a design system.</p><h3>Phases of forming a design system</h3><p>After recognizing some of the above, the team usually begins to create something between a UI kit and a large-scale design system. It unfolds as follows:</p><ol><li>Human and time resources are allocated.</li><li>Requirements are formed for a technology stack that will meet modern challenges.</li><li>Based on the requirements, a stack is chosen. The first demo versions of project skeletons appear.</li><li>Initial agreements between development and design teams emerge — the search for a common language, the desire to synchronize.</li><li>The core of the design system is born, or, more accurately, the UI kit, as the processes are yet to be fully formed.</li><li><strong>The first components are created.</strong></li><li>The components are tested in the first product feature, and adjustments are made to them.</li><li>Organic development of components in design and code occurs, with or without design reviews.</li><li>There is more time for reflection on what has been done and what currently needs adjustment. The development and design teams align their approaches and toolsets to help move forward, including:</li></ol><ul><li>Identifying and resolving issues.</li><li>Introducing new processes (syncs — meetings on how designers create a design, frontend developers work on the frontend, QA tests the frontend part of the application, and how everyone interacts with design layouts).</li><li>Implementing new tools.</li></ul><p>10. 
Gradually, there comes a phase where the design system team or individuals responsible for the design system focus less on creating new components and more on assembling pages and maintaining the components.</p><p>Based on my personal experience with design system development, I consider this an ideal path to follow. However, there are always nuances, and it depends on the situation.</p><h3>Phases of design system development</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wsJ69YzkSREGxMIv" /></figure><p>If we delve deeper into this list, the following phases of development can be identified:</p><p><strong>Hot phase</strong>: This is when components undergo the maximum number of changes in design/code during their implementation into the final product. This phase can last from a few months (on average, 6) to several years. The minimum duration I have experienced is 3 months, and the maximum is 1.5 years. It depends on the initial agreements, the experience within the development team, and the business that allocates resources and time and sets priorities.</p><p>The development of components in both code and design occurs simultaneously. There is a connection between development and design, establishing a common language, component naming conventions, and determining the best way to organize the library. Efforts are made to ensure that everything in the design references basic components or that more complex components are built from basic ones.</p><p>Many growth points are identified but may not be addressed due to resource limitations or a lack of critical necessity at this stage. These issues are addressed when time becomes available or when it becomes difficult to move forward without resolving them.</p><p>The improvement of design system workflows for design and frontend development occurs in parallel. 
They improve their own processes, and gradually, they start to solve their respective issues.</p><p><strong>Sync phase: </strong>Shared sync meetings are introduced, such as weekly or sprint meetings, where design and development discuss problems, questions, and suggestions, and agree on small steps to improve their collaboration.</p><p>Sometimes, the sync phase can reach a deadlock, where the team starts getting stuck in their own perspectives and needs to share their problems with a third party who is not as deeply immersed in the context. This person can help find solutions more quickly.</p><p>Deficiencies in processes related to design review, such as how it is conducted from the design side and what and how they review, are addressed. Critical design changes and priorities are determined. Misalignment between design layouts and compressed release timelines is addressed. Understanding how design processes translate into components in Figma, and vice versa, when designers don’t know how components are transformed into code, is also resolved.</p><p><strong>Plateau phase:</strong> Creation of new pages/features/adjustments does not require major rework of key design system components or only involves minimal changes. Major changes in component behavior are still possible: for example, rethinking UI/UX when something is found to be inconvenient and needs to be redesigned, or when new components with different behaviors need to be introduced. New components are occasionally created.</p><p>During the redesign of our product, we went through all these stages of realization and acceptance at a relatively fast pace: we built a working product, not just components, within six months.</p><p>I’ll note that all three phases can repeat from time to time depending on the required changes. 
For example, when separating components into a separate library, adding theme management, establishing stricter component creation standards, or implementing them in a new project.</p><p>In the <a href="https://medium.com/quadcode-life/how-design-systems-are-created-sharing-our-own-example-part-2-df42f786ec1a">second part of this monumental article</a>, I talk about how to define product requirements and, as a result, design system requirements, and also explain why we’ve chosen Mantine as a foundation for our own design system. Subscribe and stay tuned! I would be glad to answer any questions in the comments.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=82f58c6f3719" width="1" height="1" alt=""><hr><p><a href="https://medium.com/quadcode-life/how-design-systems-are-created-sharing-our-own-example-part-1-82f58c6f3719">How design systems are created: sharing our own example. Part 1.</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Handle Errors in Golang — Sharing Our Own Use Case]]></title>
            <link>https://medium.com/quadcode-life/how-to-handle-errors-in-golang-sharing-our-own-use-case-571a44d91b4c?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/571a44d91b4c</guid>
            <category><![CDATA[quadocde]]></category>
            <category><![CDATA[golang-development]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[golang-tutorial]]></category>
            <dc:creator><![CDATA[Aleksei Burmistrov]]></dc:creator>
            <pubDate>Tue, 04 Jul 2023 11:45:20 GMT</pubDate>
            <atom:updated>2023-07-04T11:45:20.770Z</atom:updated>
            <content:encoded><![CDATA[<h3>How to Handle Errors in Golang — Sharing Our Own Use Case</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eXcSQxdII5eycCZkU62cfw.png" /></figure><p>Hello everyone. My name is Aleksei Burmistrov, and I’m a senior Golang developer at Quadcode. During the billing development, we encountered various types of errors that can occur during program execution. In this article, I want to share our experience in structuring and handling these errors, as well as present the approaches we applied for their efficient handling and diagnostics. Our main goal is to create understandable and easily manageable errors that guarantee reliable billing system operation.</p><p>Errors are one of the most important aspects of any programming language. How errors are handled affects applications in many ways. The way errors are defined in Golang differs slightly from languages like Java, Python, and JavaScript. In Go, errors are values.</p><h3>Custom Error Type</h3><p>All error-related code is placed at the root of our project. This is done to prevent conflicts with the standard errors package. Such an approach makes error handling in the application more explicit and allows us to use the standard library without applying aliases.</p><p>The foundation of it all is the Error type, which is a specific representation of an error that implements the standard error interface. 
It has several fields, some of which may not be set:</p><pre>type errorCode string<br><br>// Application error codes.<br>const (<br>    ENOTFOUND errorCode = &quot;not_found&quot;<br>    EINTERNAL errorCode = &quot;internal&quot;<br>    // Add other error codes here<br>)<br><br>type Error struct {<br>    // Nested error<br>    Err error `json:&quot;err&quot;`<br>    // Additional error context<br>    Fields map[string]interface{}<br>    // Error code<br>    Code errorCode `json:&quot;code&quot;`<br>    // User-friendly error message<br>    Message string `json:&quot;message&quot;`<br>    // Executed operation<br>    Op string `json:&quot;op&quot;`<br>}</pre><ul><li>Op represents the operation being performed. It is a string that contains the name of a method or function, such as repo.User, convert, Auth.Login, and so on.</li><li>Message contains the error message or translation key that can be shown to the user.</li><li>Code is a specific error type. It provides standardization and unambiguous error handling, allowing easy identification and classification of errors. The list of possible codes can be extended as needed, providing flexibility and the ability to add new error types as the application evolves. It also allows us to accurately define API response statuses associated with each error type.</li><li>Fields represents data related to the error. This data can include identifiers, request parameters, or any other information that may be useful for understanding the cause of the error.</li><li>Err contains a nested error, which may be associated with the current error. It can be an error returned by an external library or our own Error. Having a nested error is useful for tracking error chains and building a trace, which we will discuss later.</li></ul><h3>Creating an Error</h3><p>To create an error, we decided not to have a separate constructor since the structure is not too complex. 
To ensure that developers don’t make mistakes when creating an error (e.g., forgetting &amp; or creating an error without Err and Message), we use our own linter for golangci-lint.</p><p>Let’s consider an example. In normal usage, we may return an error multiple times within a method. To handle this, we define a constant, conventionally called op, which will be passed to all errors in the method:</p><pre>func (r *userRepository) User(ctx context.Context, id int) (*User, error) {<br>    const op = &quot;userRepository.User&quot;<br>    ...<br>}</pre><p>If we only need to add op to the error before passing it up to the caller, we can use the helper functions OpError or OpErrorOrNil.</p><pre>...<br>var user User<br>err := db.QueryRow(ctx, query, id).Scan(&amp;user.ID, &amp;user.Name)<br>if err != nil {<br>    if errors.Is(err, pgx.ErrNoRows) {<br>        return nil, &amp;app.Error{Op: op, Code: app.ENOTFOUND, Message: &quot;user not found&quot;}<br>    }<br>    return nil, app.OpError(op, err)<br>}<br>...</pre><h3>Error Handling</h3><p>The advantage of using our custom error type is the ease with which we can write error-dependent tests and error-sensitive code outside of tests.</p><p>To check the Code, there is a helper function called ErrorCode that returns the error code if it is an application error, or EINTERNAL if it is something else.</p><pre>switch ErrorCode(err) {<br>case ENOTFOUND:<br>    ...<br>case EINTERNAL:<br>    ...<br>}</pre><p>If we need full access to the Error struct, we can use the standard library errors package. Since application errors are created as pointers (&amp;Error{...}), the errors.As target must be a pointer to *Error:</p><pre>var appErr *Error<br>if errors.As(err, &amp;appErr) {<br>    ...<br>}</pre><p>Using the Code field allows for clear conversion of errors to HTTP statuses. 
To achieve this, we can create a map where the keys are Code values and the values are the corresponding HTTP statuses.</p><p>Example of converting an error to an HTTP status:</p><pre>var codeToHTTPStatusMap = map[errorCode]int{<br>    ENOTFOUND: http.StatusNotFound,<br>    EINTERNAL: http.StatusInternalServerError,<br>    // Other mappings of error codes and HTTP statuses<br>}<br><br>func ErrCodeToHTTPStatus(err error) int {<br>    code := ErrorCode(err)<br>    if v, ok := codeToHTTPStatusMap[code]; ok {<br>        return v<br>    }<br><br>    // Return the default HTTP status for unknown errors<br>    return http.StatusInternalServerError<br>}</pre><p>Now, to get the HTTP status for an error, simply pass the error to ErrCodeToHTTPStatus. If the error code has no mapping, the default status http.StatusInternalServerError is returned.</p><h3>Analysis and Diagnostics</h3><p>When analyzing errors in our application, we rely on the information we log. However, when errors are logged as plain strings, they are hard to search and analyze. To address this, we structure the logs we send to Graylog, logging errors as objects that contain the following information:</p><ul><li>code: The error type, to understand its nature.</li><li>msg: The message from Error.Error().</li><li>fields: Additional context added to the error.</li><li>trace: The stack trace of operations.</li></ul><p>In our logs, we avoid logging the standard stack trace for application errors because it provides too much information and makes analysis difficult. 
Here’s an example of a typical stack trace:</p><pre>goroutine 1 [running]:<br>testing.(*InternalExample).processRunResult(0xc000187aa8, {0x0, 0x0}, 0x0?, 0x0, {0x1043760e0, 0x1043b8d88})<br>       /opt/homebrew/Cellar/go/1.19.4/libexec/src/testing/example.go:91 +0x45c<br>testing.runExample.func2()<br>       /opt/homebrew/Cellar/go/1.19.4/libexec/src/testing/run_example.go:59 +0x14c<br>panic({0x1043760e0, 0x1043b8d88})<br>       /opt/homebrew/Cellar/go/1.19.4/libexec/src/runtime/panic.go:890 +0x258<br>app.foo(...)<br>       app/errors_test.go:336<br>app.bar()<br>       app/errors_test.go:341 +0x38<br>app.baz()<br>       app/errors_test.go:345 +0x24<br>app.ExampleTrace()<br>       app/errors_test.go:350 +0x24<br>testing.runExample({{0x1042f8cd5, 0xc}, 0x1043b8528, {0x1042fcab9, 0x19}, 0x0})<br>       /opt/homebrew/Cellar/go/1.19.4/libexec/src/testing/run_example.go:63 +0x2ec<br>testing.runExamples(0xc000187e00, {0x10450e080, 0x1, 0x0?})<br>       /opt/homebrew/Cellar/go/1.19.4/libexec/src/testing/example.go:44 +0x1ec<br>testing.(*M).Run(0xc00014a320)<br>       /opt/homebrew/Cellar/go/1.19.4/libexec/src/testing/testing.go:1728 +0x934<br>main.main()</pre><p>In this case, there is a lot of redundant information that is difficult to analyze. However, the stack trace of operations looks like this:</p><pre>[&quot;ExampleRun&quot;, &quot;baz&quot;, &quot;bar&quot;, &quot;foo&quot;]</pre><p>The operation stack trace is easily readable and contains only the domain logic, which is important for us.</p><p>For logging errors, we use the <a href="http://go.uber.org/zap">go.uber.org/zap</a> package. 
We have created a helper function called Error(err error) zap.Field, which allows us to easily log errors as objects.</p><p>Here’s an example of using this function:</p><pre>func foo() {<br>    ...<br>    if err != nil {<br>        logger.Error(&quot;something went wrong&quot;, Error(err))<br>    }<br>}</pre><p>A logged error could look like this:</p><pre>{&quot;level&quot;:&quot;error&quot;,&quot;msg&quot;:&quot;something went wrong&quot;,&quot;error&quot;:{&quot;msg&quot;:&quot;user not found&quot;,&quot;code&quot;:&quot;not_found&quot;,&quot;trace&quot;:[&quot;userRepository.User&quot;],&quot;fields&quot;:{&quot;user_id&quot;:&quot;65535&quot;}}}</pre><h3>Final Thoughts</h3><p>In Golang, we have complete freedom in choosing how to handle errors in our applications. However, with this freedom comes great responsibility, as proper error handling plays a crucial role in ensuring the reliable operation of an application. It is important to understand that each application has its own specific requirements, and we can adapt error handling to those requirements and context.</p><p>We have the flexibility to manage the data contained in the Error structure and modify it according to our needs. 
We can make adjustments, add additional data and functionality to improve error traceability and analysis.</p><p>If you want to see code examples, you can find them at <a href="https://github.com/MrEhbr/app">https://github.com/MrEhbr/app</a>.</p><p>Proper error handling is an important component of application development, and we should constantly strive to improve our approach to error handling and analysis to ensure more reliable and convenient application operation.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=571a44d91b4c" width="1" height="1" alt=""><hr><p><a href="https://medium.com/quadcode-life/how-to-handle-errors-in-golang-sharing-our-own-use-case-571a44d91b4c">How to Handle Errors in Golang — Sharing Our Own Use Case</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Core of Personality: Strengths and Weaknesses of Different Types]]></title>
            <link>https://medium.com/quadcode-life/the-core-of-personality-strengths-and-weaknesses-of-different-types-940d69775f09?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/940d69775f09</guid>
            <category><![CDATA[personality-types]]></category>
            <category><![CDATA[communication]]></category>
            <category><![CDATA[human-resources]]></category>
            <category><![CDATA[psychology]]></category>
            <dc:creator><![CDATA[Yana Daimond]]></dc:creator>
            <pubDate>Thu, 16 Mar 2023 08:39:54 GMT</pubDate>
            <atom:updated>2023-03-16T08:41:36.183Z</atom:updated>
            <content:encoded><![CDATA[<p>Hi, Medium! My name is Yana Daimond, and I am an HR Business Partner at Quadcode. Our BP team uses a variety of tools and psychological techniques to build effective and results-oriented communication between employees and departments, which impacts the achievement of business goals.</p><p>One such tool that I will be introducing to you in this article is the classification of personality psychological types, also known as personality cores. I will focus in more detail on one of the psychotypes that has a sensitive or, as it is also called, a depressive core. People of this psychotype often prefer the HR sphere and working with people, for example, in customer service and sales.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*-NRqlzuBkafVftmL" /></figure><p>Let me clarify right away that there are no good or bad psychotypes. Their diversity is created to perform different functions in human population life and to fulfill their unique roles.</p><p>Realizing one’s psychotype opens up the opportunity to more accurately define one’s needs, as well as to shape one’s psychological portrait and understanding of “who I am”. Identifying strengths and weaknesses allows a person to see themselves as clear as day; not to change, not to escape, not to deny their personality, but to accept and develop their potential, activating their development.</p><p>Knowledge of psychotypes helps us to understand other people better, to be more tolerant of their peculiarities and, as a result, to communicate with them more successfully and effectively.</p><h3>What is the Personality Core?</h3><p>The psychotype serves as a unique foundation of personality, its core, representing a set of individual characteristics that are largely formed in childhood.</p><p>People with different psychotypes differ quite sharply from each other: they relate to themselves and to others differently, and each has their own way of speaking. 
Everyone works, creates, and fantasizes differently, commits different acts, and suffers from different illnesses.</p><p>The core is similar in meaning to temperament, including emotional, behavioral, and rational reactions. For example, one person might be pleasantly surprised by an unexpected gift, while another might be neutral: “Yes, thank you. Wonderful”.</p><p>In addition to the main personality core, there are also additional ones, which are formed because a child is usually raised by more than one person. This is not necessarily a mother and father; it could be a mother and grandmother, father and kindergarten teacher, and other people involved in the child’s life at that time.</p><p>Each core has its own shadow sides. These are the aspects that a person can develop and improve within themselves.</p><h3>Types of Personality Cores</h3><p>A person’s core can be determined through communication. For someone who knows the characteristics of each psychotype, an hour is enough to determine the main core of their interlocutor.</p><p>There are dozens of classifications of personality core types, or psychotypes. Different psychological schools use different classifications. For example, Carl Jung identified 8 psychotypes, while Soviet psychiatrist Andrey Lichko identified 13.</p><p>I use a simplified classification in my work, as simplicity and efficiency are important to me. I have identified six main psychotypes for myself:</p><ol><li>Narcissist.</li><li>Hysteroid.</li><li>Schizoid.</li><li>Paranoid.</li><li>Epileptoid.</li><li>Sensitive (depressive) type.</li></ol><p>Let’s examine the characteristics of each psychotype according to this simplified classification: who these people are, what to pay attention to, and what to consider when interacting with them, as well as their strengths and weaknesses.</p><h3>Narcissists</h3><p>Narcissists are charming and charismatic people, self-absorbed, striving for perfection, and possessing a creative personality. 
They have a need to be the center of attention and take negative evaluations from those around them painfully.</p><p>People with a narcissistic core are highly communicative, have sufficient empathy, and can make others fall in love with them. They quickly become excited about ideas, can quickly find support, and inspire others. In business, a narcissistic leader is likely to have an inspired and motivated team.</p><p>Narcissists give beautiful and careful compliments. Moreover, they genuinely believe in what they say. They can notice the beauty not only in themselves but also in the outside world; it comes naturally to them. Narcissists need to receive approval and confirmation of their successes, and they seek it in other people. It is not enough for a person with a narcissistic psychotype to wake up in the morning and tell themselves, “How wonderful I am”. They need others to do it for them; they need a “mirror” that will respond, “Everything is great with you/you’re doing a great job.”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*r3eXIgjzrFtcmr1D" /></figure><p>If you want to achieve something with a person of this psychotype, you need to praise their work and achievements. In turn, narcissists will not say directly, “You did a bad job,” but will use phrases like, “A specialist of your level is capable of more.” This is not a direct reproach, but it can hurt.</p><p>An example of a narcissistic psychotype from literature and cinema is Jay Gatsby from “The Great Gatsby”.</p><h3>Hysteroids</h3><p>Many scientists and practitioners use this concept, not referring to a “hysterical” person, but speaking about a type whose core is focused “inward” and on gaining attention through self-focus. Unlike narcissists, hysteroids are more self-sufficient and stable in terms of self-esteem.</p><p>This type is capable of transforming according to the situation, changing roles and playing them with dignity. 
Hysteroids are confident in themselves and find it difficult to accept the fact that someone else might also want to be on the “stage” with them. They may dress brightly and unconventionally. The opinions of others don’t bother them much, so they can afford to “put on a show.” If a narcissist opens the door and says, “Good day to everyone, my dears!”, a hysteroid is quite capable of kicking the door with the phrase, “Hey, dudes!”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wxCAt3hOLbPKQMBP" /></figure><p>A hysteroid leader will draw attention to themselves. Therefore, people with this psychotype are often lone heroes. The shadow side of hysteroids is that they can be insensitive. They may express themselves loudly, shout, interrupt. For them, it’s a kind of game, their image. At meetings, you can spot hysteroids by the fact that they “take” the microphone and talk about themselves for a relatively long time.</p><p>An example of this psychological type in cinema is Ruby Rhod from “The Fifth Element”.</p><h3>Schizoids</h3><p>The schizoid psychotype is about the deep inner world, mental processes, logical chains, high concentration, focus, and a new perspective on the world at a super angle, which can be extremely inconvenient for other types. Again, I emphasize that the name of the psychotype does not mean that a person is schizophrenic or similar.</p><p>Schizoids are about measured pace and deep thoughts. They see details and patterns that others do not notice, ask atypical questions, and help to look beyond the usual worldview.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*egWG4eDFI6cRIqXr" /></figure><p>Schizoids are characterized by introversion. They get tired of people, and can take a long time to respond, not because they don’t want to, but because they can’t find the strength or don’t see the significance in the message. People for them are elements. 
At the same time, it is not uncommon for a person to have a combination of a hysteroid and a schizoid. They can, for example, give a scientific presentation to a large audience, and then retreat into their shell.</p><p>An example of a schizoid is Sheldon Cooper from the TV series “The Big Bang Theory”.</p><h3>Paranoids</h3><p>A paranoid type is a projectile with great penetration power. Energetic, never doubting their correctness and the incorrectness of all who hesitate.</p><p>In ordinary life, this is just a purposeful and self-confident person who knows what they need. If they have a super idea, they drive straight toward it, sweeping away everything in their path and not paying attention to small things, details, and even people.</p><p>It is very difficult to identify paranoids; they are peculiar “chameleons.” They can camouflage while pursuing their goal: not showing themselves at the beginning, then switching to radical actions.</p><p>On the plus side, these are very powerful people. They can survive and rise from the ashes. They have immense inner confidence. Paranoids create ideas that others will follow. They have charisma, the ability to sell their vision at a rational level, and to convey meaning.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*JGBGXLDPnOyJ0ekt" /></figure><p>For the sake of the goal, paranoids are ready to step on heads, regardless of the emotions of other people, discussing plans behind others’ backs. They can make very tough decisions in the name of achieving results: “We need to cut costs. We’re firing 500 people.” — “We only have 50 left…” — “It doesn’t matter. We’re going for the goal.”</p><p>The name of the psychotype itself comes from the constant paranoia of its representatives. They are worried that someone will know more and not share information. 
Paranoids want to be involved in everything, but at the same time, they will give little in return.</p><p>An example of a paranoid from literature is Dolores Umbridge from “Harry Potter”.</p><h3>Epileptoids</h3><p>Let’s not confuse epileptoid with epileptic. We are discussing all psychotypes within the norm.</p><p>Reasonable, thrifty, sociable. Epileptoid thinking is pragmatic, clear, and understandable to all people. They structure their statements well, breaking them down into simple phrases. They do not overuse introductory sentences and participial phrases, nor do they ponder high philosophical categories.</p><p>Their logic is consistent and simple. However, an epileptoid, like a paranoid, can twist logic with borrowed arguments.</p><p>Epileptoids are people who follow an idea or authority. They are characterized by a strong attachment: if they believe in someone or something, it will be extremely difficult to persuade them otherwise. They are also very systematic, consistent, and structured. Epileptoids feel comfortable working according to an established process, adhering to regulations. They can deviate from the rules if the leader confirms that it is necessary. In general, epileptoids have the principles of “must” and “necessary.”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*fxNKGziZ5OwZaBJ8" /></figure><p>People with this core are loyal, devoted, and responsive, but in their work, they have minimal flexibility. Because of this, it can be difficult to communicate with epileptoids. They find it hard to move fast and break their usual routine. However, they are well suited for monotonous work or companies with pronounced bureaucracy.</p><p>In the movies, an example of such a psychotype is Desmond Doss from the film “Hacksaw Ridge”.</p><h3>Sensitive (Depressive) Type</h3><p>Narcissists are interested in beauty, aesthetics, and the sublime. Hysteroids focus on their own “self.” Epileptoids are focused on processes. 
Schizoids explore how things work. Paranoids concentrate on a certain ideology. Only sensitive types have a significant focus on people. More often, this type serves as an addition to the main personality core.</p><p>This psychotype is characterized by excessive sensitivity, impressionability, high moral demands primarily on oneself, low self-esteem, shyness, and timidity. Under the blows of fate, people with a sensitive core easily become extremely cautious, suspicious, and withdrawn, so they are vigilant and watch the reactions of others. Such people are diligent and devoted. They can show kindness and mutual assistance, be very sociable and communicative. Social recognition is important to them. They have predominantly intellectual and aesthetic interests.</p><p>The sensitive core is more focused on others than other types: people with this psychotype are altruists, volunteers, HR managers, psychologists, and psychiatrists. They inspire trust because sensitive types usually have a kind and attentive facial expression.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Riy2cT431sumSKWY" /></figure><p>From this come the advantages of the sensitive core — the ability to feel another person, empathize, and help. These are the people with unconditional acceptance: everyone is good, everyone has rights, and they have the right to be who they are now. The sensitive core feels the need to catch up with others and help them. Therefore, for people in HR teams, hard skills are their soft skills.</p><p>One of the characteristics of this psychotype is that a person with a sensitive core may become depressed: not in the clinical sense, but in the sense of losing their identity without others and forgetting to take care of themselves. Sensitive people tend to engage in self-examination. They often think, “What will people think of me? How do I look? I need to help. 
I can’t help but help.” However, such self-examination is not always productive and can lead to a state of permanent sadness.</p><p>An example of a character with a sensitive type is Amélie from the movie “Amélie”.</p><h4>How a sensitive type can support themselves</h4><p>When a person wants to help everyone, be friends with everyone, and listen to everyone, the question arises: how can they support themselves? In my examples, I will consider working in an HR team because, in business, the sensitive core is most often encountered in such positions.</p><p>There are three main support scenarios, and they complement each other:</p><ol><li>Reflection and self-development.</li><li>Support from a specialist.</li><li>Help from the team.</li></ol><h4>Reflection and self-development</h4><p>It is important for a person with a sensitive core to realize and accept that they need to take care of themselves so as not to fall into despair or become emotionally exhausted. Therefore, it is worth starting with reflection, gradually steering it in a productive direction. Specialized literature can help with this, for example, “Emotional Intelligence: Why It Can Matter More Than IQ” by Daniel Goleman.</p><h4>Support from a specialist</h4><p>I think you’ve heard that every psychotherapist has their own psychotherapist. It really is true: mature specialists always have a supervisor. This can be a psychologist, psychiatrist, coach, or mentor. For a person with a depressive core, such support is truly useful and necessary, so do not hesitate to seek external help.</p><h4>Help from the team</h4><p>In our BP team, we practice team meetings with case analysis, where everyone can share their difficult tasks and ask for advice on how to act in a specific situation. 
This teamwork is essential: the team assesses the situation impartially, which helps generate recommendations for interaction, and raises rational questions that lower the emotional charge around a particular case.</p><p>Team supervision provides a sensitive person with more algorithms for solving various cases. This allows them to make decisions not based on an emotional surge, but with a clear understanding of why a particular action is needed.</p><p>I recommend holding general supervision meetings for HR employees at least once a week. There are many different tasks in this position, and such regularity is optimal. Even if there is nothing to say at the moment, it is still worth attending general meetings. Supervision greatly enriches you and builds experience and practice. You listen to what others are solving and think, “I had such a situation. I suffered all weekend, thinking about George, to whom I said the wrong thing.” And a colleague’s decision, different from yours, expands variability and develops thinking not only in terms of human interaction but also in terms of business context.</p><p>In my experience, the optimal number of supervision participants is 4–5 people. Then everyone has the opportunity to speak if necessary, but there is no overload of opinions. When 10 people ask their questions and offer solutions, you easily forget what the first person said.</p><p>At Quadcode, we don’t have a fixed team supervision protocol. Whoever has a request speaks up. The importance of the case and the desire to hear opinions from others are crucial. If I don’t want to tell or listen, nothing will work.</p><p>Anonymity is also essential. Supervision participants should not share what happens during the session. As an HR team, we provide this guarantee to our colleagues. You can generally avoid mentioning specific names when discussing a case; this is also a workable scenario. Another critical element is being non-judgmental. 
Opinions should only be expressed through a personal point of view: not “you are wrong,” but “it seems to me that you did something wrong.”</p><p>I can also recommend using the emotional waterfall technique at meetings. You may have noticed in life that when something happens, you want to call everyone and tell them everything. But after a certain number of calls, everything sounds less emotional. The first listeners get the maximum amount of details and vivid emotions, while the fifth gets: “Here’s the story. They hurt me. But I figured it out.” This is the waterfall effect, which helps to reduce the emotional response and switch to rational thinking.</p><p>If the first waterfall occurs in a safe environment for the employee and among those who listen, it is beneficial and reduces burnout. HR burnout is directly related to emotional strain and overload. If you can relieve the first wave, exhale, and think rationally, it becomes easier. It is also helpful to conduct a mini-retrospective with colleagues — what you did right and what you could do better. This helps to stabilize and expand the variability of future decisions.</p><p>As for who should moderate supervision, I would base that decision on the HR team itself. How self-organized and mature is it? If we have young HR specialists with little experience, it is worth involving an external moderator: an outsider or a manager with greater facilitation competencies.</p><h3>In conclusion</h3><p>I believe that understanding and recognizing one’s own psychotype and the psychotypes of others is the key to successful and comfortable communication. 
Don’t be afraid of your peculiarities or the peculiarities of others, as they help us look at situations from different angles and find the best solutions in life and business.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=940d69775f09" width="1" height="1" alt=""><hr><p><a href="https://medium.com/quadcode-life/the-core-of-personality-strengths-and-weaknesses-of-different-types-940d69775f09">The Core of Personality: Strengths and Weaknesses of Different Types</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Build an Employer’s Brand from Scratch]]></title>
            <link>https://medium.com/quadcode-life/how-to-build-an-employers-brand-from-scratch-4ef8b96fa2ea?source=rss----526934940ae0---4</link>
            <guid isPermaLink="false">https://medium.com/p/4ef8b96fa2ea</guid>
            <category><![CDATA[employer-branding]]></category>
            <category><![CDATA[recruiting]]></category>
            <category><![CDATA[culture]]></category>
            <category><![CDATA[employee-engagement]]></category>
            <dc:creator><![CDATA[Olga Berezova]]></dc:creator>
            <pubDate>Wed, 15 Feb 2023 14:36:25 GMT</pubDate>
            <atom:updated>2023-02-15T14:41:45.920Z</atom:updated>
            <content:encoded><![CDATA[<p>If you don’t know anything about brand strategies but you’re faced with the task of promoting a company’s brand or finding a brand manager, then this article will help you navigate this sea of creativity and communication.</p><p>My name is Olya, and for 5 years I’ve been developing an employer’s brand in IT. Today I want to share my vision of how to better organize this process.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZhraQHe4MeIbBGa6UaeDew.png" /></figure><h3>Types of Company Brands</h3><p>A company’s brand most often works for two key audiences:</p><ul><li><strong>A Product/Business brand</strong> is when a brand works with customers and communicates to them the value of your product/service.</li><li><strong>An employer’s brand</strong> is when a brand works with current/potential employees and communicates to them the value that you’re an excellent employer.</li></ul><p>The brand can also help promote the image of the company among potential investors, the government, Martians — the specifics of the audience will influence the strategy. But the set of tools will be similar: articles, videos, events, etc.</p><p>Bottom line: no matter what brand you decide to develop from scratch because the tools for its promotion are identical. And the steps below will help you structure your brand building.</p><h3><strong>How to Build a Brand: a Manual</strong></h3><h4><strong>Method #1: Launching MVP</strong></h4><p>If you need to do it quickly, inexpensively, and you can turn a blind eye to the quality, then I suggest starting with the MVP — the minimum version of the product sufficient to test the idea. 
What if no one wants your brand, and you’ve already hired a staff of brand managers and gone broke on salaries?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*4um-_FtPQqbxKjnrQ9Odgg.jpeg" /></figure><p><strong>MVP Recipe:</strong></p><ul><li>Choose 2 of your favorite colors.</li><li>Use templates to create the first layouts in your colors, for example via <a href="http://www.canva.com">Canva</a>.</li><li>Come up with a slogan. Take your favorite 5–7 slogans, mix them up, and add your brand name. Voilà!</li><li>Create a profile on one of the social networks.</li><li>Fill out the profile with 5–7 posts. Images can be found on photo stocks or taken with a phone. Or you can delegate both visual and text creation to neural networks. For example, use <a href="https://www.photoleapapp.com/">Photoleap app</a> for images and <a href="https://chat.openai.com/auth/login">ChatGPT</a> for texts.</li><li>Let your heart guide you in setting up your targeting. If there’s no money for it, then let all employees, their friends, and relatives make reposts.</li><li>Collect reactions and first results.</li></ul><p><strong>The advantages:</strong> I think that in a week, you can get the first feedback and understand how to move forward. This is the fastest way for a brand to make itself known.</p><p><strong>The minuses:</strong> it’s hard to expect a brand thrown together “quick and dirty” from whatever was at hand to leave a positive impression or attract target customers; it’s a rather risky bet.</p><h4>Method #2: Going Strategic</h4><p>If you want to approach a brand launch in a more deliberate and high-quality way, then set aside time and money. A brand, in general, is an ephemeral thing. You invest in it, but it’s almost impossible to know for sure if they bought your loaf of bread because they trust the brand or because you put up a discount. 
Such is the elusive magic of branding; I myself am still in search of an ironclad formula.</p><p>Let’s start with self (brand) awareness. At the first stage, it’s important to understand what kind of brand you are, how you’re already perceived, and at what point you want to arrive. Research will help with this.</p><p>I’m going to talk about the employer brand, but you can also try this approach on product strategies.</p><p><strong>Step 1 — internal research</strong></p><ul><li>In-depth interviews with top management/HR/marketing.</li></ul><p>I estimate the time assuming that you’re the only brand manager, and you still have other tasks.</p><p>Approximate time: 2 weeks (15 interviews).</p><p>The purpose of the interview is to create a starting point and formulate measurable expectations of the result.</p><p>Sample questions:</p><p>● How many times in the last month have you encountered the need for a strong employer brand?</p><p>● Why is brand development important to you?</p><p>● How would you formulate the first steps that will help the company build a strong employer brand?</p><p>If you want to take a ready-made workflow and look more impressive in the eyes of your colleagues, here’s a link <a href="https://www.usertesting.com/blog/20-questions-every-product-manager-should-ask">to examples of CustDev questions</a> that can easily be adapted to the task at hand.</p><p><strong>Step 2 — analysis of competitors</strong></p><p>Make a list of actual and desired competitors. For example, you really like the respectable Google, and you put it in the competition column. But! Check how comparable your sales volumes, budgets, and media weight are. 
If your company’s numbers are still far from Google’s, it’s better to remove Google from the list of competitors (at least in the first year of brand development).</p><blockquote>The key idea: choose competitors you have enough budget to compete with right now and to whom your audience is already going.</blockquote><p>Next, prepare criteria for comparing competitors. For example, in my employer-brand projects these might be: frequency of mentions in the media, number of employees, EVP (employer value proposition), number of external channels, number of events, number of reviews and their quality, etc.</p><p>Approximate time: 1 week (7 competitors).</p><p>Formats for analysis:</p><ul><li>The good ol’ Excel spreadsheet where you enter all your competitors and compare them by common criteria.</li></ul><p>And for fans of fashionable frameworks:</p><ul><li><a href="https://vc.ru/design/308331-kak-s-pomoshchyu-karty-censydiam-sozdat-pozicionirovanie-brenda">Censydiam</a>.</li><li><a href="https://plenum.ru/blog/brand-map/">Mapping competitors</a>.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/981/1*xmh9BHrwoWqO0ZmRQGFCwQ.png" /></figure><p><strong>Step 3 — study the target audience</strong></p><p>At this stage, it’s important to draw a concrete portrait of your target audience (TA). 
This is where heart-to-heart conversations and in-depth interviews with the TA will help you.</p><p>Approximate time: 1.5 weeks (20 interviews).</p><p>Goal: to understand the customer/user/employee journey (CJM — Customer Journey Map) and audience demand.</p><p>Sample questions:</p><ul><li>How did you find your last job?</li><li>Have you been in contact with any no-name companies?</li><li>How would you rate our site/social network on a 10-point scale?</li></ul><p>For a link to other CustDev questions, see step 1.</p><p>Then you need to analyze the answers and break them down into 2–3 key blocks.</p><p>For example, in the course of your research, your respondents often mentioned corporate events and their attitude toward them. Collect their quotes related to corporate parties and come up with a heading that unites similar quotes. In our case, the target audience’s quotes were well illustrated and unified by the heading: Corporate parties are great, but you can’t attach them to a resume.</p><p><strong>Step 4 — formation of the concept</strong></p><p>The research is complete, and a lot of material has accumulated. The next goal is to choose a concept that meets the demands of the TA and the business.</p><p>To do this, you need to make sense of the data and identify the most common responses from the TA, competitors, and stakeholders. Next, generate a Reason to Believe (RTB) list of your brand’s benefits. RTBs can be divided into emotional (for example: <em>we build technologies of the future)</em> and rational (for example: <em>we give all employees the latest models of tech equipment).</em></p><p>After that, you need to strengthen your communication with an insight and a hypothesis, which will later help you build a creative campaign. 
Simply put, without academic chic, an <strong>insight</strong> is a situation familiar to the consumer that contains a real problem for them (more info <a href="https://umi-innovation.com/blog/market-insight-definition/">here</a>). A <strong>hypothesis</strong> is an assumption about the target audience that can be confirmed or refuted during the research (more info <a href="https://www.widerfunnel.com/blog/how-to-write-a-hypothesis/">here</a>).</p><p>I’ve given an example in the table.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UR9imcO_lCvFIM8pXcVLCA.png" /></figure><p>This is NOT rocket science. If you need me to describe this step in more detail, then write to me in the comments, and I’ll talk about it in a separate piece.</p><p><strong>Step 5 — formation of a communication strategy</strong></p><p>This is where you need to bring a copywriter on board to help create the strategy and maintain further communication.</p><p>You need to show the copywriter:</p><ul><li>The concept.</li><li>The RTB list.</li><li>The competitor analysis.</li></ul><p>Next, formulate a clear brief for creating a communication strategy and a content plan for the various platforms (social networks, blog, website), audiences, etc.</p><p>Approximate time for creative work and analysis: 2 weeks.</p><p>Participants: brand manager, copywriter, creative/art director, and standout representatives of the TA for joint brainstorming.</p><p><strong>Step 6 — create a corporate identity</strong></p><p>Gather what has already been done, add a little, and take it to the designer:</p><ol><li>Competitor analysis.</li><li>Brand positioning (audience description + hypothesis/insight + RTB + communication strategy and content plan).</li><li>Visual references.</li><li>A list of platforms/formats that need to be branded.</li></ol><p>Approximate time: 2–3 weeks (including discussions and approvals).</p><p>Remember that brand, text, and design are very subjective things, so coordination 
and testing can drag on considerably. But all the work done above lets you rely on the data and channel doubts constructively.</p><p><strong>Step 7 — launching a brand platform</strong></p><p>Congratulations! You’ve done a great job, and now it’s time to go out into the world.</p><p>What might the timeline be for launching a brand from scratch?</p><ul><li>In the first month, launch the main communication channels, such as a landing page and a social network account. Drive traffic to them and collect feedback.</li><li>After a quarter, adjust the brand strategy if necessary and launch additional channels, testing new formats to increase the number of touchpoints with the audience. This is called activation of the brand platform; it has enough nuances of its own to warrant a separate article.</li></ul><h3>What We Ended Up With</h3><p>In this article, I briefly described the structure and steps that will help you build a brand from scratch. If you’d like to know more about any of the steps, let me know, and I’ll try to respond in a new article. And if you have a more specific question, write to me <a href="https://www.linkedin.com/in/olga-berezova-34342210b/">on LinkedIn</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4ef8b96fa2ea" width="1" height="1" alt=""><hr><p><a href="https://medium.com/quadcode-life/how-to-build-an-employers-brand-from-scratch-4ef8b96fa2ea">How to Build an Employer’s Brand from Scratch</a> was originally published in <a href="https://medium.com/quadcode-life">Quadcode</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>