<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Samar Acharya on Medium]]></title>
        <description><![CDATA[Stories by Samar Acharya on Medium]]></description>
        <link>https://medium.com/@samar.acharya?source=rss-5144571fc87b------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*jOrJCvPRZrIXfQrU.</url>
            <title>Stories by Samar Acharya on Medium</title>
            <link>https://medium.com/@samar.acharya?source=rss-5144571fc87b------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 07 Apr 2026 20:08:34 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@samar.acharya/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Building Heroku Review Apps on Bitbucket with Bitbucket Pipelines]]></title>
            <link>https://medium.com/@samar.acharya/building-heroku-review-apps-on-bitbucket-with-bitbucket-pipelines-ac18dca6b77e?source=rss-5144571fc87b------2</link>
            <guid isPermaLink="false">https://medium.com/p/ac18dca6b77e</guid>
            <category><![CDATA[continuous-integration]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[heroku]]></category>
            <category><![CDATA[bitbucket]]></category>
            <category><![CDATA[bitbucket-pipelines]]></category>
            <dc:creator><![CDATA[Samar Acharya]]></dc:creator>
            <pubDate>Fri, 22 May 2020 22:07:59 GMT</pubDate>
            <atom:updated>2020-05-22T22:11:08.251Z</atom:updated>
            <content:encoded><![CDATA[<p>The GitHub and Heroku integration has an awesome review apps feature: it creates a review app for every pull request and re-deploys the review app whenever the pull request is updated. A pretty neat feature, and it helped our product team easily review features and run acceptance on them. Then we had to move to Bitbucket to unify the entire engineering team under a single provider. Most of engineering was already on Bitbucket and only a few of us were on GitHub, so moving us to Bitbucket seemed the obvious option.</p><p>This post outlines how I set up a similar process to review apps on Bitbucket using Bitbucket Pipelines. Note that we already had an appropriate app.json as our <a href="https://devcenter.heroku.com/articles/app-json-schema">application manifest</a>, so make sure you have one appropriate for your application as well.</p><h3><strong>Part 1: Review Apps Generation/Updates</strong></h3><blockquote>We start by defining the Bitbucket pipeline. This is done by creating a <strong>bitbucket-pipelines.yml</strong> file and defining your pipeline there.</blockquote><pre>image: techgaun/awscli-git:1.16.241</pre><pre>clone:<br>  depth: full</pre><pre>pipelines:<br>  pull-requests:<br>    &#39;**&#39;:<br>      - step:<br>          script:<br>            - bash &quot;${BITBUCKET_CLONE_DIR}/devops/review.sh&quot;</pre><p>BITBUCKET_CLONE_DIR is one of the several <a href="https://confluence.atlassian.com/bitbucket/variables-in-pipelines-794502608.html">default environment variables</a> available to builds in Bitbucket. 
The techgaun/awscli-git:1.16.241 image is a Docker image I created for this purpose; it has aws-cli (version 1.16.241) and git installed.</p><p>In the above pipeline, for every pull request created or updated, we execute a bash script located at &lt;project_root&gt;/devops/review.sh.</p><p>Next, this is what our review.sh script, the actual meat, looks like:</p><pre>#!/bin/bash</pre><pre>cd &quot;${BITBUCKET_CLONE_DIR}&quot;</pre><pre>PROJECT_NAME=&quot;${BITBUCKET_REPO_SLUG}-stage-pr-${BITBUCKET_PR_ID}&quot;<br>PROJECT_URL=&quot;https://${PROJECT_NAME}.herokuapp.com&quot;<br>NOTIFICATION_TEXT=&quot;Review App for ${BITBUCKET_GIT_HTTP_ORIGIN}/pull-requests/${BITBUCKET_PR_ID} is deployed &amp; available at ${PROJECT_URL}&quot;</pre><pre>content_type=&quot;Content-Type: application/json&quot;<br>accept=&quot;Accept: application/vnd.heroku+json; version=3&quot;<br>auth=&quot;Authorization: Bearer ${HEROKU_API_KEY}&quot;</pre><pre>echo -e &quot;Checking if the review app for this PR already exists or not\n&quot;<br>curl -ns &quot;https://api.heroku.com/apps/${PROJECT_NAME}&quot; -H &quot;${accept}&quot; -H &quot;${auth}&quot;<br>current_app_id=$(curl -ns &quot;https://api.heroku.com/apps/${PROJECT_NAME}&quot; -H &quot;${accept}&quot; -H &quot;${auth}&quot; | jq -r &#39;.id&#39;)</pre><pre>echo -e &quot;Current app id is: ${current_app_id}\n&quot;</pre><pre>if [[ &quot;${current_app_id}&quot; = &quot;null&quot; || &quot;${current_app_id}&quot; = &quot;not_found&quot; ]]; then<br>  TGZ_FILE=&quot;${PROJECT_NAME}.tgz&quot;<br>  S3_PATH=&quot;s3://${S3_BUCKET_ARTIFACT}/${TGZ_FILE}&quot;</pre><pre>  git archive --format=tar.gz -o &quot;${TGZ_FILE}&quot; &quot;${BITBUCKET_COMMIT}&quot;<br>  aws s3 cp &quot;${TGZ_FILE}&quot; &quot;${S3_PATH}&quot;</pre><pre>  S3_URL=$(aws s3 presign 
&quot;${S3_PATH}&quot;)</pre><pre>  trap &quot;aws s3 rm ${S3_PATH}&quot; EXIT</pre><pre>  app_id=$(curl -ns -X POST https://api.heroku.com/app-setups \<br>    -H &quot;${content_type}&quot; \<br>    -H &quot;${accept}&quot; \<br>    -H &quot;${auth}&quot; \<br>    -d &quot;{\&quot;app\&quot;: {\&quot;name\&quot;: \&quot;${PROJECT_NAME}\&quot;, \&quot;personal\&quot;: false, \&quot;organization\&quot;: \&quot;zego\&quot;, \&quot;locked\&quot;: false},\&quot;source_blob\&quot;: {\&quot;url\&quot;: \&quot;${S3_URL}\&quot;}, \&quot;overrides\&quot;: {\&quot;env\&quot;: {\&quot;HEROKU_APP_NAME\&quot;: \&quot;${PROJECT_NAME}\&quot;}}}&quot; \<br>    | jq -r &#39;.id&#39;)</pre><pre>  echo -e &quot;Created heroku app with id: ${app_id}&quot;<br>  curl -ns https://api.heroku.com/app-setups/${app_id} -H &quot;${accept}&quot; -H &quot;${content_type}&quot; -H &quot;${auth}&quot;</pre><pre>  while [[ `curl -ns https://api.heroku.com/app-setups/${app_id} -H &quot;${accept}&quot; -H &quot;${content_type}&quot; -H &quot;${auth}&quot; | jq -r &#39;.status&#39;` = &quot;pending&quot; ]]; do<br>    sleep 2<br>    printf &quot;.&quot;<br>  done</pre><pre>  curl -ns https://api.heroku.com/app-setups/${app_id} -H &quot;${accept}&quot; -H &quot;${content_type}&quot; -H &quot;${auth}&quot;<br>  echo &quot;&quot;</pre><pre>  if [[ `curl -ns https://api.heroku.com/app-setups/${app_id} -H &quot;${accept}&quot; -H &quot;${content_type}&quot; -H &quot;${auth}&quot; | jq -r &#39;.status&#39;` = &quot;succeeded&quot; ]]; then<br>    echo -e &quot;\nCompleted setting up heroku app!!! 
Please visit ${PROJECT_URL}&quot;<br>    curl -X POST -H &quot;${content_type}&quot; --data &quot;{\&quot;text\&quot;:\&quot;${NOTIFICATION_TEXT}\&quot;}&quot; &quot;${SLACK_WEBHOOK_URL}&quot;<br>  else<br>    echo &quot;Failed to setup heroku app!!!&quot;<br>    false<br>  fi<br>else<br>  git push --force &quot;https://heroku:${HEROKU_API_KEY}@git.heroku.com/${PROJECT_NAME}.git&quot; &quot;${BITBUCKET_COMMIT}:refs/heads/master&quot;<br>  echo -e &quot;\nStarted deploying on heroku app!!! Please visit ${PROJECT_URL}&quot;<br>  curl -X POST -H &quot;${content_type}&quot; --data &quot;{\&quot;text\&quot;:\&quot;${NOTIFICATION_TEXT}\&quot;}&quot; &quot;${SLACK_WEBHOOK_URL}&quot;<br>fi</pre><p>Let’s dissect the above shell script:</p><ul><li>we construct PROJECT_NAME from the repo slug and PR id and use it to build the project URL, so the Heroku app will be available at https://&lt;your-repo-slug&gt;-stage-pr-&lt;pr_number&gt;.herokuapp.com</li><li>we set up the notification text (sent to Slack to notify people watching that channel to run acceptance) and the various headers Heroku requires.</li><li>we check whether a review app already exists for our project name.</li><li>if the review app does not exist, we create a tar archive of the current codebase, copy it to an S3 bucket, and make an API call to Heroku to create a new app with the given project name and the pre-signed URL of the S3 object. We then wait until the Heroku deploy completes and, once it does, send a notification to Slack. 
If the deploy fails, we exit with an error message.</li><li>if the review app already exists (the pull request was created in the past and a new commit was pushed to the PR branch), then we simply perform a git push to the Heroku git remote.</li></ul><p>Finally, once we have the pipeline and shell script set up, we need to do some configuration on Bitbucket:</p><ul><li>Enable the Bitbucket pipeline from <a href="https://bitbucket.org/paylease/residentweb/admin/addon/admin/pipelines/settings">https://bitbucket.org/&lt;your_user_or_org&gt;/&lt;repo&gt;/admin/addon/admin/pipelines/settings</a></li><li>Add the appropriate repository variables in <a href="https://bitbucket.org/paylease/residentweb/admin/addon/admin/pipelines/repository-variables">https://bitbucket.org/&lt;your_user_or_org&gt;/&lt;repo&gt;/admin/addon/admin/pipelines/repository-variables</a></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/855/1*ECTqeIHRE43RfE9oXbaBZQ.png" /><figcaption>Repository Variables Configuration</figcaption></figure><p>Note that the AWS key-pair listed above must be able to write to the S3 bucket and create a pre-signed URL so that Heroku can access the code archive file.</p><p>Once everything is configured, you should be able to see the pipeline executing for your open pull requests.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AH1tM9cXTzmxI8Fgy6NSiw.png" /><figcaption>Bitbucket Builds</figcaption></figure><p>You can see the logs and the Heroku app URL printed by the bash script above in the pipeline build.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PrKMhTQeiTP_zgLylYbnEA.png" /></figure><p>We now have working review app generation, but there’s more to it. 
We need to be able to clean up the review apps once the pull request is merged (or declined).</p><h3><strong>Part 2: Tearing Down The Review Apps</strong></h3><p>Once a PR is merged (or declined), we need to clean up the generated review app. When we built this pipeline, there was no way to do so within Bitbucket Pipelines. We could have leveraged pipelines on the target branches of PRs on merge, but that would not work for declined PRs. So I looked for alternatives. In the meantime, I needed a webhook executor for other work, because most of our infrastructure sits behind a firewall that Bitbucket cannot reach. So I built a public-facing webhook service using <a href="https://github.com/adnanh/webhook">adnanh/webhook</a> that allowed us to execute arbitrary shell commands for any registered webhook payload and source.</p><p>We create a hooks.json and add a matching rule for bb-review-apps:</p><pre>[<br>  {<br>    &quot;id&quot;: &quot;bb-review-apps&quot;,<br>    &quot;execute-command&quot;: &quot;/opt/webhooks/scripts/stop-review-app.sh&quot;,<br>    &quot;command-working-directory&quot;: &quot;/opt/webhooks/scripts&quot;,<br>    &quot;pass-arguments-to-command&quot;: [<br>      {<br>        &quot;source&quot;: &quot;payload&quot;,<br>        &quot;name&quot;: &quot;repository.name&quot;<br>      },<br>      {<br>        &quot;source&quot;: &quot;payload&quot;,<br>        &quot;name&quot;: &quot;pullrequest.id&quot;<br>      }<br>    ],<br>    &quot;trigger-rule&quot;: {<br>      &quot;match&quot;: {<br>        &quot;type&quot;: &quot;regex&quot;,<br>        &quot;value&quot;: &quot;(MERGED|DECLINED)&quot;,<br>        &quot;parameter&quot;: {<br>          &quot;source&quot;: &quot;payload&quot;,<br>          &quot;name&quot;: &quot;pullrequest.state&quot;<br>        }<br>      }<br>    }<br>  }<br>]</pre><p>We create the stop-review-app.sh file and place it in the /opt/webhooks/scripts directory:</p><pre>#!/bin/bash</pre><pre># ${1} -&gt; 
repository.name<br># ${2} -&gt; pullrequest.id</pre><pre>HEROKU_API_KEY=&quot;&lt;REPLACE_WITH_YOUR_HEROKU_API_KEY&gt;&quot;</pre><pre>PROJECT_NAME=&quot;${1}-stage-pr-${2}&quot;</pre><pre>content_type=&quot;Content-Type: application/json&quot;<br>accept=&quot;Accept: application/vnd.heroku+json; version=3&quot;<br>auth=&quot;Authorization: Bearer ${HEROKU_API_KEY}&quot;</pre><pre>echo -e &quot;Shutting down review app for ${PROJECT_NAME}\n&quot;</pre><pre>curl -s -XDELETE &quot;https://api.heroku.com/apps/${PROJECT_NAME}&quot; -H &quot;${accept}&quot; -H &quot;${content_type}&quot; -H &quot;${auth}&quot;</pre><p>Now, you can run the webhook server:</p><pre>./webhook -hooks hooks.json -verbose -hotreload</pre><p>Then, you can expose the webhook through nginx or another proxy. I used nginx, with letsencrypt managing the certificates.</p><p>Basically, your configuration would require an upstream definition:</p><pre>upstream webhooks {<br>  server 127.0.0.1:9000;<br>}</pre><p>and then a server block with a location block that <em>proxy_pass</em>es to the <em>webhooks</em> upstream.</p><p>Finally, add the webhook configuration to your Bitbucket repository webhooks. This requires you to choose Pull Request Merged and Declined as the triggers for the webhook to receive data. See the screenshot below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/797/1*XYGW2AfQ4AnJBdkAcn-IPw.png" /><figcaption>Bitbucket Webhook Configuration</figcaption></figure><p>And there you have it: a complete review app setup with Bitbucket and Heroku. I hope it’s useful for someone out there.</p>]]></content:encoded>
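The nginx server block described in the post above is not shown in full, so here is a hedged sketch of what it might look like. The hostname, the certificate paths, and the /hooks/ location prefix (which I believe matches adnanh/webhook's default endpoint layout) are assumptions for illustration, not taken from the original post:

```nginx
upstream webhooks {
  server 127.0.0.1:9000;
}

server {
  listen 443 ssl;
  server_name hooks.example.com;  # hypothetical hostname

  # letsencrypt-issued certificates (paths assume certbot defaults)
  ssl_certificate /etc/letsencrypt/live/hooks.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/hooks.example.com/privkey.pem;

  location /hooks/ {
    # forward webhook deliveries to the locally running webhook server
    proxy_pass http://webhooks;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
```

With this in place, Bitbucket would be pointed at https://hooks.example.com/hooks/bb-review-apps (again, a placeholder hostname).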
        </item>
        <item>
            <title><![CDATA[Easy Bitbucket to GitHub Migration]]></title>
            <link>https://medium.com/@samar.acharya/easy-bitbucket-to-github-migration-d7eb76888a6c?source=rss-5144571fc87b------2</link>
            <guid isPermaLink="false">https://medium.com/p/d7eb76888a6c</guid>
            <category><![CDATA[github]]></category>
            <category><![CDATA[bitbucket]]></category>
            <category><![CDATA[git]]></category>
            <dc:creator><![CDATA[Samar Acharya]]></dc:creator>
            <pubDate>Fri, 22 May 2020 19:16:43 GMT</pubDate>
            <atom:updated>2020-05-22T22:10:31.295Z</atom:updated>
            <content:encoded><![CDATA[<h3>Easy Bitbucket to GitHub Mass Migration</h3><p>Recently, we went through a Bitbucket to GitHub migration, and several of us were tasked with moving the repositories we administered over to GitHub.</p><h3><strong>Part 1: Moving The Repositories</strong></h3><p>So I wrote a quick bash script that takes a file with repository names separated by newlines. The input could have been the response of a Bitbucket API call to retrieve the list of repositories, but I was not migrating all of them, so it was easier for me to compile the list in a file. It would have been a mess to manage 45+ repositories and configure the base branch protection rules and basic config on each of them if I had done the migration manually. The script:</p><ul><li>(bare) clones the repo from Bitbucket</li><li>creates the repo on GitHub with internal visibility (we are using Enterprise)</li><li>grants write permission to the engineering and qe teams</li><li>pushes all the branches to GitHub</li><li>sets up branch protection rules</li></ul><p>As long as there are no configuration changes to branch protection and the passed repo settings, this script is fully re-runnable for a migrated repository (for example, to re-sync branches).</p><p>And usage is pretty simple:</p><p>Create a <a href="https://github.com/settings/tokens">personal access token</a> with proper access to create repos and use it as the GH_TOKEN envvar. 
The sample command looks like this:</p><pre>BB_ORG=&lt;bb_org_name&gt; GH_ORG=&lt;gh_org_name&gt; GH_TOKEN=&lt;github_token&gt; ./repo-mover.sh &lt;file_with_repo_name_list&gt;</pre><p><a href="https://github.com/techgaun/tg-misc-scripts/blob/master/bb-to-github-repo.sh">techgaun/tg-misc-scripts</a></p><pre>#!/usr/bin/env bash<br><br># moves repos listed in a file from bb to gh<br># usage: BB_ORG=&lt;bb_org_name&gt; GH_ORG=&lt;gh_org_name&gt; GH_TOKEN=&lt;github_token&gt; ./repo-mover.sh &lt;file_with_repo_name_list&gt;<br><br>set -euo pipefail<br><br>BB_ORG=&quot;${BB_ORG:-paylease}&quot;<br>GH_ORG=&quot;${GH_ORG:-gozego}&quot;<br>GH_USER=&quot;${GH_USER:-techgaun}&quot;<br>GH_CRED=&quot;${GH_USER}:${GH_TOKEN}&quot;<br><br>ROOT=$(pwd)<br>BRANCH_PROTECTION_BODY=&quot;$(cat &lt;&lt;-EOF<br>{<br>  &quot;required_status_checks&quot;: {<br>    &quot;strict&quot;: true,<br>    &quot;contexts&quot;: []<br>  },<br>  &quot;enforce_admins&quot;: true,<br>  &quot;required_pull_request_reviews&quot;: {<br>    &quot;dismissal_restrictions&quot;: {<br>      &quot;users&quot;: [],<br>      &quot;teams&quot;: [<br>        &quot;qe&quot;,<br>        &quot;engineering&quot;<br>      ]<br>    },<br>    &quot;dismiss_stale_reviews&quot;: true,<br>    &quot;require_code_owner_reviews&quot;: true,<br>    &quot;required_approving_review_count&quot;: 2<br>  },<br>  &quot;restrictions&quot;: {<br>    &quot;users&quot;: [],<br>    &quot;teams&quot;: [<br>      &quot;qe&quot;<br>    ]<br>  },<br>  &quot;allow_force_pushes&quot;: false,<br>  &quot;allow_deletions&quot;: false<br>}<br>EOF<br>)&quot;<br><br>while read repo; do<br>  echo<br>  echo &quot;Starting repo creation for ${repo}...&quot;<br><br>  cd &quot;${ROOT}&quot;<br>  rm -rf &quot;${repo}.git&quot;<br>  git clone --bare &quot;git@bitbucket.org:${BB_ORG}/${repo}.git&quot;<br>  cd &quot;${repo}.git&quot;<br>  curl -H &#39;Accept: application/vnd.github.nebula-preview+json&#39; -u &quot;${GH_CRED}&quot; 
&quot;https://api.github.com/orgs/${GH_ORG}/repos&quot; -d &quot;{\&quot;name\&quot;: \&quot;${repo}\&quot;, \&quot;private\&quot;: true, \&quot;visibility\&quot;: \&quot;internal\&quot;, \&quot;delete_branch_on_merge\&quot;: true}&quot;<br>  curl -XPUT -u &quot;${GH_CRED}&quot; &quot;https://api.github.com/orgs/${GH_ORG}/teams/engineering/repos/${GH_ORG}/${repo}&quot; -d &#39;{&quot;permission&quot;: &quot;push&quot;}&#39;<br>  curl -XPUT -u &quot;${GH_CRED}&quot; &quot;https://api.github.com/orgs/${GH_ORG}/teams/qe/repos/${GH_ORG}/${repo}&quot; -d &#39;{&quot;permission&quot;: &quot;push&quot;}&#39;<br>  git push --mirror &quot;git@github.com:${GH_ORG}/${repo}.git&quot;<br>  echo &quot;${BRANCH_PROTECTION_BODY}&quot; | curl -H &#39;Accept: application/vnd.github.luke-cage-preview+json&#39; -XPUT -u &quot;${GH_CRED}&quot; &quot;https://api.github.com/repos/${GH_ORG}/${repo}/branches/master/protection&quot; -d @-<br><br>  echo &quot;Completed repo creation for ${repo}...&quot;<br>  cd &quot;${ROOT}&quot;<br>  rm -rf &quot;${repo}.git&quot;<br>done &lt; &quot;${1}&quot;</pre><h3><strong>Part 2: Updating Your Local Origin</strong></h3><p>I work on several of the repositories that I migrated, so naturally, updating the remotes on my local filesystem can be a daunting task. 
So I wrote another bash script to do that.</p><p><a href="https://github.com/techgaun/tg-misc-scripts/blob/master/mass-git-remote-update.sh">techgaun/tg-misc-scripts</a></p><pre>#!/usr/bin/env bash<br><br># Usage: ${0} [OLD_REMOTE] [NEW_REMOTE] [REMOTE_NAME]<br><br>OLD_REMOTE=&quot;${1:-bitbucket.org:paylease}&quot;<br>NEW_REMOTE=&quot;${2:-github.com:gozego}&quot;<br>REMOTE_NAME=&quot;${3:-origin}&quot;<br><br>find &quot;${PWD}&quot; -type d -name &#39;.git&#39; | while read dir; do<br>  cd &quot;${dir}/..&quot;<br>  current_remote_url=$(git remote get-url &quot;${REMOTE_NAME}&quot;)<br>  if grep &quot;${OLD_REMOTE}&quot; &lt;&lt;&lt; &quot;${current_remote_url}&quot;; then<br>    new_remote_url=$(sed &quot;s/${OLD_REMOTE}/${NEW_REMOTE}/&quot; &lt;&lt;&lt; &quot;${current_remote_url}&quot;)<br>    echo &quot;Changing ${current_remote_url} to ${new_remote_url}&quot;<br>    git remote set-url &quot;${REMOTE_NAME}&quot; &quot;${new_remote_url}&quot;<br>  fi<br>done</pre><p>Basically, you can place the above script at the root of all your projects and run it with 3 arguments; the 3rd is optional if your remote is named origin (the default).</p><pre>./mass-git-remote-update.sh &#39;bitbucket.org:&lt;your_bb_org&gt;&#39; &#39;github.com:&lt;your_gh_org&gt;&#39;</pre><p>I know these scripts may not be usable right away due to use cases specific to you and your team, but they should give you a good basis for fully automating such migrations.</p>]]></content:encoded>
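The remote update script above hinges on a single sed substitution. As a quick standalone sanity check of that rewrite logic (the repo name myrepo is a made-up example), you can run the substitution on a sample remote URL in isolation:

```shell
#!/usr/bin/env bash
# Demonstrates the sed substitution used by mass-git-remote-update.sh
# on a sample SSH remote URL (myrepo is a hypothetical repository name).
OLD_REMOTE="bitbucket.org:paylease"
NEW_REMOTE="github.com:gozego"
current_remote_url="git@bitbucket.org:paylease/myrepo.git"
new_remote_url=$(sed "s/${OLD_REMOTE}/${NEW_REMOTE}/" <<< "${current_remote_url}")
echo "${new_remote_url}"  # prints git@github.com:gozego/myrepo.git
```

Note that only the host:org prefix changes; the repository path after it is preserved, which is why the script is safe to run across many checkouts.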
        </item>
        <item>
            <title><![CDATA[Mustang — A Slackbot in Elixir]]></title>
            <link>https://tutorials.botsfloor.com/mustang-a-slackbot-in-elixir-de1b74bace4e?source=rss-5144571fc87b------2</link>
            <guid isPermaLink="false">https://medium.com/p/de1b74bace4e</guid>
            <category><![CDATA[elixir]]></category>
            <category><![CDATA[hubot]]></category>
            <category><![CDATA[bots]]></category>
            <category><![CDATA[slack]]></category>
            <category><![CDATA[chatbots]]></category>
            <dc:creator><![CDATA[Samar Acharya]]></dc:creator>
            <pubDate>Wed, 21 Dec 2016 05:47:23 GMT</pubDate>
            <atom:updated>2017-04-15T05:25:57.712Z</atom:updated>
            <content:encoded><![CDATA[<p>Writing a bot is always fun. And now that <a href="https://slack.com/">Slack</a> is so popular, it makes sense to build a bot primarily targeted at Slack. We use Slack a lot at <a href="https://www.brighterlink.io/">work</a>, and it is a primary communication medium we cannot live without.</p><p>My primary motivation for writing the Slackbot was the need to switch between various services quite frequently. For example, I would need to find the <a href="https://github.com/techgaun/ex_mustang/blob/master/lib/ex_mustang/responders/gmap.ex">address for places</a> near me or get the <a href="https://github.com/techgaun/ex_mustang/blob/master/lib/ex_mustang/responders/time.ex">current time</a> in another timezone. Likewise, I would frequently perform <a href="https://github.com/techgaun/ex_mustang/blob/master/lib/ex_mustang/responders/whois.ex">whois</a> lookups using one of the online services, and I would keep searching for programming help on <a href="https://github.com/techgaun/ex_mustang/blob/master/lib/ex_mustang/responders/howdoi.ex">Stack Overflow</a>. Another motivation was to remind us about long <a href="https://github.com/techgaun/ex_mustang/blob/master/lib/ex_mustang/responders/github.ex">open pull requests</a> that need to be reviewed by someone.</p><p>We had been using Elixir for a while, and I wanted to play around with it for writing a bot. With a quick search, I found <a href="https://github.com/hedwig-im/hedwig">Hedwig</a>, a bot framework with support for various adapters, and I was happy that I could offload the hard work to Hedwig and just write the easy bits of the bot. 
They even have a <a href="https://github.com/hedwig-im/hedwig_slack">Slack adapter</a>, so I was able to quickly bootstrap and build the bot.</p><p>I named my bot after my lovely puppy <a href="https://github.com/techgaun/ex_mustang#about-mustang">Mustang</a>, who lives in Nepal and was with me during the <a href="https://en.wikipedia.org/wiki/April_2015_Nepal_earthquake">April 2015 Earthquake</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/555/1*DjcpgKPtdZibexwqEtbHaA.jpeg" /><figcaption>Mustang</figcaption></figure><p>Hedwig’s high-level design is obvious and intuitive to how you would think about bot design, and it’s highly inspired by <a href="https://hubot.github.com/">hubot</a>. The main building block of a bot is its responders, and with Hedwig, you can use Hedwig.Responder to build yours. All you do to write a responder is use macros like hear and respond to define the patterns to listen for, then write your logic for how the bot should respond to those patterns. For more details on how to use Hedwig and its adapters and how to build responders, you can refer to the <a href="https://hexdocs.pm/hedwig/">official documentation</a>. With BEAM’s concurrency and scalability, it’s pretty easy to build a bot with high concurrency and a low footprint. I have not really exploited these features yet, but that’s an area I should look into.</p><p>The other issue I had with the Hedwig Slack adapter was getting the current state so that I could fetch the list of channels and map human-readable channel names to Slack channel identifiers. However, I was able to get around that easily by using the Erlang sys module as seen <a href="https://github.com/techgaun/ex_mustang/blob/master/lib/ex_mustang/utils.ex#L17-L26">here</a>. 
Reading through the documentation for Hedwig, I did not find an easy way to do this; if there is one, feel free to suggest it.</p><p>I would be happy to get new responders (or ideas in the form of <a href="https://github.com/techgaun/ex_mustang/issues">issues</a>) from everyone. Also, it can be a good way to learn Elixir if you’re just starting with it. Below are the current feature highlights:</p><p><strong>Scheduled checks:</strong></p><ul><li>GitHub open pull request watcher to notify about long-open PRs</li><li>Standup reminder when it’s time to gather for standup</li><li>Scheduled check of a list of usernames/emails against <a href="https://haveibeenpwned.com/">haveibeenpwned.com</a></li><li>Simple uptime monitoring with support for status, content and content type checks and the ability to specify method, headers, timeout, etc.</li></ul><p><strong>Responders:</strong></p><ul><li>google maps search</li><li><a href="https://haveibeenpwned.com/">haveibeenpwned.com</a> search</li><li>random quotes</li><li>slap people on slack</li><li>time in a given timezone</li><li>unix to iso8601 time conversion</li><li>base64 encoding/decoding</li><li>isitup check to see if a site is down or not</li><li>insult to get random insults for the person you mention</li><li>httpcat to get http status code cats</li><li>howdoi to get the top result for your programming question from Stack Overflow</li><li>funny random commitmsg</li><li>clifu to get command line fu</li><li>get whois result of domain names</li></ul><p>My personal favorites are howdoi and insult, and I love using insult whenever I can.</p><p>Getting ExMustang up and running is pretty easy. All you need to do is create a slackbot from <a href="https://my.slack.com/services/new/bot">HERE</a> and copy the slackbot token. 
Then, you can follow the setup instructions available <a href="https://github.com/techgaun/ex_mustang#setup">HERE</a>.</p><p>Some screenshots of using ExMustang follow:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/825/1*52ZF2EPCSbZGTjluX-7yAg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/762/1*mZjG3Y0TAhjou07uStrMUA.png" /></figure><hr><p><a href="https://tutorials.botsfloor.com/mustang-a-slackbot-in-elixir-de1b74bace4e">Mustang — A Slackbot in Elixir</a> was originally published in <a href="https://tutorials.botsfloor.com">Dev Stash</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Saancho — Command Line Password Manager]]></title>
            <link>https://medium.com/@samar.acharya/command-line-password-manager-ecf51d0fb191?source=rss-5144571fc87b------2</link>
            <guid isPermaLink="false">https://medium.com/p/ecf51d0fb191</guid>
            <category><![CDATA[command-line]]></category>
            <category><![CDATA[linux]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[passwords]]></category>
            <dc:creator><![CDATA[Samar Acharya]]></dc:creator>
            <pubDate>Fri, 02 Dec 2016 10:06:46 GMT</pubDate>
            <atom:updated>2016-12-02T10:07:01.626Z</atom:updated>
            <content:encoded><![CDATA[<p>I’m a CLI addict and tend to use the CLI for most of my work. One thing I recently wanted to use the command line for was managing the growing number of credentials I was creating for the various services I use at work and for fun. I did not want to install tools like LastPass or KeePass, and instead started implementing a very simple command line password manager using gnupg2 (with a fallback to gnupg).</p><p>My credential management had been pretty bad so far, as I was almost always doing the following:</p><ul><li><a href="https://github.com/techgaun/bash-aliases/blob/master/.bash_aliases#L14">generate a random password on the CLI</a></li><li>save the random password in a text file</li></ul><p>I know saving to a text file was pretty stupid, so I decided to do better, since I keep warning others about these kinds of stupid mistakes. I use gnupg for some other purposes too, so I knew it would work perfectly for this use case as well.</p><p><strong>Meet </strong><a href="https://github.com/nepalihackers/saancho"><strong>Saancho</strong></a><strong>.</strong></p><p>Saancho is a command-line password manager that uses <a href="https://www.gnupg.org/">gnupg</a> to store credentials with AES256 encryption. All you have to remember is a single passphrase for your credential store, and you are all set for storing your credentials without having to worry about encryption. All thanks to gnupg.</p><p>Installation is super simple. Just run the command below:</p><pre>sudo wget &quot;https://raw.githubusercontent.com/nepalihackers/saancho/master/saancho&quot; -O /usr/bin/saancho &amp;&amp; sudo chmod +x /usr/bin/saancho</pre><p>When you save a credential in <em>saancho</em> for the first time, you will also enter the passphrase that you will use for all future credential storage. 
The interface is the simplest menu I could come up with.</p><p>I’m planning a few more features and have created <a href="https://github.com/nepalihackers/saancho/issues">issues</a> for them. If you feel like hacking on this, feel free to do so and send me pull requests. I hope to get some feedback on this tool.</p><p>GitHub Repo: <a href="https://github.com/nepalihackers/saancho">https://github.com/nepalihackers/saancho</a></p>]]></content:encoded>
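The random password generation mentioned in the post above can be reproduced with standard tools. Here is a minimal sketch under that assumption (the actual alias in the linked bash-aliases file may differ; `genpass` is a hypothetical name):

```shell
#!/usr/bin/env bash
# Emit a random alphanumeric password of the requested length,
# drawing entropy from the kernel CSPRNG (/dev/urandom).
genpass() {
  local len="${1:-16}"
  LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "${len}"
  echo
}

genpass 20
```

`LC_ALL=C` keeps `tr` byte-oriented so multibyte locales do not trip it up; `head -c` truncates the stream to exactly the requested number of characters.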
        </item>
        <item>
            <title><![CDATA[Having Fun With Heroku Help System]]></title>
            <link>https://medium.com/@samar.acharya/having-fun-with-heroku-help-system-d4c45803cdd3?source=rss-5144571fc87b------2</link>
            <guid isPermaLink="false">https://medium.com/p/d4c45803cdd3</guid>
            <category><![CDATA[heroku]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Samar Acharya]]></dc:creator>
            <pubDate>Fri, 14 Oct 2016 09:24:23 GMT</pubDate>
            <atom:updated>2016-10-18T18:00:02.722Z</atom:updated>
            <content:encoded><![CDATA[<p>Heroku is one of the most popular PaaS offerings out there and we are quite satisfied with what it has to offer. Recently, I ran into a problem and got a chance to use the <a href="https://help.heroku.com/tickets">Heroku help ticket system</a>. While doing so, I came across a feature to share a ticket with other organization members or collaborators.</p><p><strong>Sharing tickets with any Heroku user</strong></p><p>An organization member can abuse the help system and share a ticket with any other Heroku user without the organization owner knowing about the sharing context. The only requirement is that the other user has to be a registered user on Heroku.</p><pre>curl &#39;https://support-api.heroku.com/tickets/:ticket_id/shares&#39; -X POST -H &#39;Accept: application/vnd.heroku+json; version=1&#39; -H &#39;Authorization: (redacted)&#39; --data &#39;{&quot;user_id&quot;:&quot;&lt;another_user&gt;@example.com&quot;}&#39;</pre><p>A request like the one above will successfully add any other Heroku user.</p><p><strong>Information leak in comments API calls</strong></p><p>When you load tickets on Heroku, the comments API is called; it loads the comments but also includes an <em>actor </em>payload as below:</p><pre>{<br>    &quot;id&quot;: &quot;0596b09e-4c56-40f9-a13a-f43acd637951&quot;,<br>    &quot;created_at&quot;: &quot;2016-10-07T18:43:58Z&quot;,<br>    &quot;updated_at&quot;: null,<br>    &quot;ticket_id&quot;: 409207,<br>    &quot;public&quot;: true,<br>    &quot;body&quot;: &quot;&lt;snipped&gt;&quot;,<br>    &quot;actor_id&quot;: &quot;3ab8a77d-e1d2-446e-905f-32d0a1961dd0&quot;,<br>    &quot;actor&quot;: {<br>        &quot;allow_tracking&quot;: true,<br>        &quot;beta&quot;: false,<br>        &quot;email&quot;: &quot;*****@heroku.com&quot;,<br>        &quot;id&quot;: &quot;3ab8a77d-e1d2-446e-905f-32d0a1961dd0&quot;,<br>        &quot;last_login&quot;: &quot;2016-09-21T16:22:10Z&quot;,<br>        &quot;name&quot;: 
&quot;*****&quot;,<br>        &quot;sms_number&quot;: &quot;+1 ***-***-****&quot;,<br>        &quot;two_factor_authentication&quot;: true,<br>        &quot;verified&quot;: true,<br>        &quot;created_at&quot;: &quot;2016-03-21T22:47:42Z&quot;,<br>        &quot;updated_at&quot;: &quot;2016-09-23T07:24:37Z&quot;,<br>        &quot;suspended_at&quot;: null,<br>        &quot;default_organization&quot;: null,<br>        &quot;delinquent_at&quot;: null,<br>        &quot;herokai&quot;: true,<br>        &quot;addon_provider&quot;: false<br>    },<br>    &quot;meta_data&quot;: null,<br>    &quot;sfid&quot;: null<br>}</pre><p>As you can see above, the actor information is quite revealing about the user.</p><p><strong>Information leak about organization via tickets API</strong></p><p>The ticket API leaks certain information about an organization to whoever the ticket is shared or even the ticket creator who’s probably not supposed to see those information.</p><pre>curl &#39;https://support-api.heroku.com/tickets/:ticket_id&#39; -H &#39;authorization: Bearer &lt;auth_token&gt;&#39;<br><br>{<br>    &quot;id&quot;: *****,<br>    &quot;score&quot;: 4350,<br>    &quot;complete&quot;: true,<br>    &quot;created_at&quot;: &quot;2016-10-07T16:09:23Z&quot;,<br>    &quot;last_public_change_at&quot;: &quot;2016-10-07T19:09:08Z&quot;,<br>    &quot;first_responded_at&quot;: &quot;2016-10-07T18:43:58Z&quot;,<br>    &quot;last_responded_at&quot;: &quot;2016-10-07T18:43:58Z&quot;,<br>    &quot;updated_at&quot;: &quot;2016-10-07T19:09:08Z&quot;,<br>    &quot;app_name&quot;: null,<br>    &quot;app_id&quot;: null,<br>    &quot;permission_granted&quot;: false,<br>    &quot;state&quot;: &quot;open&quot;,<br>    &quot;priority&quot;: &quot;normal&quot;,<br>    &quot;subject&quot;: &quot;********&quot;,<br>    &quot;requester_id&quot;: &quot;d5b5b77d-6fd4-4af4-a2cc-7eb8c1bf8272&quot;,<br>    &quot;requester&quot;: {<br>        &quot;id&quot;: &quot;*****@users.heroku.com&quot;,<br>        
&quot;email&quot;: &quot;*****@*****.com&quot;,<br>        &quot;uuid&quot;: &quot;d5b5b77d-6fd4-4af4-a2cc-7eb8c1bf8272&quot;,<br>        &quot;full_name&quot;: &quot;Samar Acharya&quot;,<br>        &quot;deleted_at&quot;: null<br>    },<br>    &quot;assignee_id&quot;: null,<br>    &quot;assignee&quot;: null,<br>    &quot;actor_id&quot;: &quot;d5b5b77d-6fd4-4af4-a2cc-7eb8c1bf8272&quot;,<br>    &quot;actor&quot;: {<br>        &quot;id&quot;: &quot;****@users.heroku.com&quot;,<br>        &quot;email&quot;: &quot;******@*****.com&quot;,<br>        &quot;uuid&quot;: &quot;d5b5b77d-6fd4-4af4-a2cc-7eb8c1bf8272&quot;,<br>        &quot;full_name&quot;: &quot;Samar Acharya&quot;,<br>        &quot;deleted_at&quot;: null<br>    },<br>    &quot;category&quot;: &quot;account&quot;,<br>    &quot;addon&quot;: null,<br>    &quot;requires_survey&quot;: false,<br>    &quot;comment&quot;: {<br>        &quot;body&quot;: &quot;****&quot;<br>    },<br>    &quot;last_comment&quot;: {<br>        &quot;body&quot;: &quot;****&quot;<br>    },<br>    &quot;views&quot;: [{<br>        &quot;id&quot;: &quot;fc928ccc-c182-4d95-b070-a3fcb57a8f27&quot;,<br>        &quot;name&quot;: &quot;Support&quot;,<br>        &quot;slug&quot;: &quot;support&quot;,<br>        &quot;external_id&quot;: &quot;&quot;,<br>        &quot;ticket_count&quot;: 48,<br>        &quot;addon&quot;: false,<br>        &quot;meta_data&quot;: {<br>            &quot;public_name&quot;: &quot;Support&quot;,<br>            &quot;review_category&quot;: &quot;support&quot;,<br>            &quot;notes&quot;: &quot;&quot;<br>        }<br>    }],<br>    &quot;tags&quot;: [{<br>        &quot;id&quot;: &quot;fc928ccc-c182-4d95-b070-a3fcb57a8f27&quot;,<br>        &quot;name&quot;: &quot;Support&quot;,<br>        &quot;slug&quot;: &quot;support&quot;,<br>        &quot;kind&quot;: &quot;View&quot;,<br>        &quot;external_id&quot;: &quot;&quot;,<br>        &quot;ticket_count&quot;: 48,<br>        &quot;assigned_count&quot;: 3,<br>        
&quot;addon&quot;: false,<br>        &quot;created_at&quot;: &quot;2014-10-03T11:22:06.575+00:00&quot;,<br>        &quot;updated_at&quot;: &quot;2016-02-12T16:17:17.358+00:00&quot;,<br>        &quot;surveyable&quot;: true,<br>        &quot;meta_data&quot;: {<br>            &quot;public_name&quot;: &quot;Support&quot;,<br>            &quot;review_category&quot;: &quot;support&quot;,<br>            &quot;notes&quot;: &quot;&quot;<br>        }<br>    }],<br>    &quot;meta_data&quot;: {<br>        &quot;source&quot;: &quot;help&quot;,<br>        &quot;query&quot;: &quot;help redirect premium support&quot;,<br>        &quot;requester_browser_tz_offset&quot;: &quot;-5&quot;,<br>        &quot;requester_browser_agent&quot;: &quot;Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0&quot;,<br>        &quot;support_level&quot;: {<br>            &quot;csa&quot;: true,<br>            &quot;sla&quot;: true,<br>            &quot;assigned_csa&quot;: &quot;****@heroku.com&quot;,<br>            &quot;account_name&quot;: &quot;*****&quot;<br>        },<br>        &quot;status&quot;: {<br>            &quot;status&quot;: [{<br>                &quot;system&quot;: &quot;Apps&quot;,<br>                &quot;status&quot;: &quot;green&quot;<br>            }, {<br>                &quot;system&quot;: &quot;Data&quot;,<br>                &quot;status&quot;: &quot;green&quot;<br>            }, {<br>                &quot;system&quot;: &quot;Tools&quot;,<br>                &quot;status&quot;: &quot;green&quot;<br>            }],<br>            &quot;incidents&quot;: [],<br>            &quot;scheduled&quot;: []<br>        },<br>        &quot;requester&quot;: {<br>            &quot;id&quot;: &quot;*****@users.heroku.com&quot;,<br>            &quot;email&quot;: &quot;*****@******.com&quot;,<br>            &quot;uuid&quot;: &quot;d5b5b77d-6fd4-4af4-a2cc-7eb8c1bf8272&quot;,<br>            &quot;full_name&quot;: &quot;Samar Acharya&quot;,<br>            &quot;deleted_at&quot;: 
null<br>        }<br>    },<br>    &quot;spend&quot;: 3000.0,<br>    &quot;premium_support&quot;: true,<br>    &quot;surveyable&quot;: &quot;true&quot;,<br>    &quot;boosted&quot;: false,<br>    &quot;origin&quot;: &quot;Support App&quot;,<br>    &quot;sfid&quot;: &quot;5003A00000******&quot;<br>}</pre><p><strong>Deleting a ticket by anyone with access to it</strong></p><p>If you are sharing tickets with other users, you should be aware that they can simply delete your ticket. Note that this is a soft delete, so you can reach out to your Heroku contacts and have them undo it, but it is still a hassle.</p><pre>curl &#39;https://support-api.heroku.com/tickets/:ticket_id&#39; -H &#39;authorization: Bearer &lt;another_user_token&gt;&#39; -XDELETE<br>...json output...</pre><p>None of these issues were regarded as a potential security risk by the Heroku engineer, and I found out recently that the information leak issues I’ve mentioned above have since been fixed. I was particularly disappointed by the way Heroku support acted on this particular issue. I also later found out that help.heroku.com is in scope for their security bounty on <a href="https://bugcrowd.com/heroku">bugcrowd</a>. I do understand that it’s up to Heroku to decide whether a particular report is a security bug, but in this case I am amazed that they did not accept it as a security issue (albeit one of low impact).</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d4c45803cdd3" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using JWT and Auth0 for an Elixir/Phoenix web app]]></title>
            <link>https://medium.com/brightergy-engineering/jwt-authentication-based-api-in-phoenix-6c4b51909b19?source=rss-5144571fc87b------2</link>
            <guid isPermaLink="false">https://medium.com/p/6c4b51909b19</guid>
            <category><![CDATA[json-web-token]]></category>
            <category><![CDATA[phoenix]]></category>
            <category><![CDATA[elixir]]></category>
            <category><![CDATA[authentication]]></category>
            <dc:creator><![CDATA[Samar Acharya]]></dc:creator>
            <pubDate>Mon, 22 Aug 2016 17:49:25 GMT</pubDate>
            <atom:updated>2017-05-03T17:14:25.863Z</atom:updated>
            <content:encoded><![CDATA[<p>TL;DR check out our <a href="https://github.com/techgaun/jwt-phoenix-sample">sample project</a> and <a href="https://github.com/techgaun/jwt-phoenix-sample/commits/master">commits</a> to see it in action.</p><p><a href="https://jwt.io/introduction/">JWT</a> or JSON Web Token is quickly becoming the standard of choice for secure API authentication and information exchange. In this post, I will explore how to build a <a href="https://github.com/techgaun/jwt-phoenix-sample">sample web application</a> using JWT for authentication of clients.</p><p>Note that there are a variety of authentication strategies, such as <a href="https://github.com/ueberauth/ueberauth">ueberauth</a>, and you’ll have to pick the right model for your project. Since we offloaded user management to <a href="https://auth0.com/">Auth0</a>, we decided to take an approach that is not necessarily superior but is simple enough for our use case.</p><p>We’re using Elixir 1.3 and Phoenix 1.2.1 for this example. Let’s create a new Phoenix project.</p><pre>mix phoenix.new jwt_phoenix --no-brunch --no-html</pre><p>Let’s add joken to the list of application dependencies.</p><pre>defp deps do<br>  ...<br>  {:joken, &quot;~&gt; 1.1&quot;}<br>  ...<br>end</pre><p>Then run the <strong><em>mix deps.get</em></strong> command to fetch the dependencies. 
We’re going to use Auth0, so let’s create an appropriate config block in <a href="https://github.com/Brightergy/jwt-phoenix-sample/blob/master/config/config.exs#L25-L32">config.exs</a>.</p><pre>config :jwt_phoenix, :auth0,<br>  app_baseurl: System.get_env(&quot;AUTH0_BASEURL&quot;),<br>  app_id: System.get_env(&quot;AUTH0_APP_ID&quot;),<br>  app_secret: &quot;AUTH0_APP_SECRET&quot;<br>    |&gt; System.get_env<br>    |&gt; Kernel.||(&quot;&quot;)<br>    |&gt; Base.url_decode64<br>    |&gt; elem(1)</pre><p>Once we create an application in Auth0, we have three main configuration items.</p><ul><li>app_baseurl — the base URL or <strong><em>issuer</em></strong> of the JWT tokens</li><li>app_id — the application id or the <strong><em>audience</em></strong> of the JWT tokens</li><li>app_secret — the application secret, used to <strong>sign</strong> the JWT tokens</li></ul><p>We’re using environment variables instead of hard-coding these configuration items in config.exs, but you could also use something like <strong><em>prod.secret.exs</em></strong> to inject sensitive data into your deployments. Hard-coding your app_secret is a big security no-no, as malicious users can use it to create valid tokens. The other important thing to note is that we have to decode the application secret using <strong><em>Base.url_decode64/1</em></strong> as Auth0 base64-encodes its secret.</p><p>Now that the configuration is set up, let’s build a <a href="https://github.com/Brightergy/jwt-phoenix-sample/blob/master/web/helpers/jwt_helpers.ex">helper module</a> that consists of functions for verifying and creating tokens on the server side.</p><pre>defmodule JwtPhoenix.JWTHelpers do<br>  import Joken, except: [verify: 1]<br><br>  @doc &quot;&quot;&quot;<br>  use for future verification, e.g. 
on socket connect<br>  &quot;&quot;&quot;<br>  def verify(jwt) do<br>    verify<br>    |&gt; with_json_module(Poison)<br>    |&gt; with_compact_token(jwt)<br>    |&gt; Joken.verify<br>  end<br><br>  @doc &quot;&quot;&quot;<br>  use for verification via plug<br>  issuer should be our auth0 domain<br>  app_metadata must be present in id_token<br>  &quot;&quot;&quot;<br>  def verify do<br>    %Joken.Token{}<br>    |&gt; with_json_module(Poison)<br>    |&gt; with_signer(hs256(config[:app_secret]))<br>    |&gt; with_validation(&quot;aud&quot;, &amp;(&amp;1 == config[:app_id]))<br>    |&gt; with_validation(&quot;exp&quot;, &amp;(&amp;1 &gt; current_time))<br>    |&gt; with_validation(&quot;iat&quot;, &amp;(&amp;1 &lt;= current_time))<br>    |&gt; with_validation(&quot;iss&quot;, &amp;(&amp;1 == config[:app_baseurl]))<br>  end<br><br>  @doc &quot;&quot;&quot;<br>  Create token from client id and secret<br>  Used for unit tests<br>  &quot;&quot;&quot;<br>  def create_bearer_token(auth_scopes, config_items \\ %{:signer =&gt; :app_secret, :aud =&gt; :app_id}) do<br>    %Joken.Token{claims: auth_scopes}<br>    |&gt; with_json_module(Poison)<br>    |&gt; with_signer(hs256(config[config_items[:signer]]))<br>    |&gt; with_aud(config[config_items[:aud]])<br>    |&gt; with_iat<br>    |&gt; with_iss(config[:app_baseurl])<br>    |&gt; with_exp(current_time + 86_400)<br>    |&gt; sign<br>    |&gt; get_compact<br>  end<br><br>  @doc &quot;&quot;&quot;<br>  Return error message for `on_error`<br>  &quot;&quot;&quot;<br>  def error(conn, _msg) do<br>    {conn, %{:errors =&gt; %{:detail =&gt; &quot;unauthorized&quot;}}}<br>  end<br><br>  defp config, do: Application.get_env(:jwt_phoenix, :auth0)<br>end</pre><p>In the above code, we created the <strong><em>verify/0</em></strong> function which uses the <strong><em>Joken.Plug. 
</em></strong>The Joken implementation will automatically grab the authorization header and pass it along, similar to how our <strong><em>verify/1</em></strong> function works. You can see that we import all the functions but <strong><em>verify/1 </em></strong>from Joken because we’re writing our own <strong><em>verify/1</em></strong>. The <strong><em>verify/1 </em></strong>we’ve written will be used for authorizing connections via Phoenix channels, as the normal Plug flow does not work there. You can see we’ve passed our token through a few <strong><em>with_validation</em></strong> functions so that we test our token against each of the expectations, such as the correct issued-at time, issuer, expiration time, audience and signature.</p><p>We have also written a simple <strong><em>create_bearer_token/2 </em></strong>function<strong> </strong>which we can use for creating bearer tokens for our unit tests. As you might have noticed, we don’t need Auth0 to create tokens; anyone with the right client id and secret can create valid tokens.</p><p>We also have an <strong><em>error/2</em></strong> so that we can send an error message for invalid tokens. We will specify both <strong><em>verify/0 </em></strong>and <strong><em>error/2</em></strong> as part of the plug when we add <strong><em>Joken.Plug</em></strong> to the <strong><em>:api</em></strong> pipeline. Let’s configure our <a href="https://github.com/Brightergy/jwt-phoenix-sample/blob/master/web/router.ex#L35-L37"><strong><em>web/router.ex</em></strong></a>.</p><pre>pipeline :api do<br>  plug :accepts, [&quot;json&quot;]<br>  plug Joken.Plug,<br>    verify: &amp;JwtPhoenix.JWTHelpers.verify/0,<br>    on_error: &amp;JwtPhoenix.JWTHelpers.error/2</pre><p>We’ve added the plug to the pipeline, so now all API requests will need a valid JWT in the Authorization header. 
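<p>As an aside, a JWT is just three base64url-encoded segments joined by dots: header, claims, and an HMAC signature over the first two. The shell sketch below hand-rolls an HS256 token with openssl so you can see exactly what Joken verifies (the secret, audience and issuer here are made-up stand-ins for your Auth0 values):</p>

```shell
#!/bin/sh
set -eu

# base64url: standard base64, then swap +/ for -_ and drop the = padding
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

secret="s3cr3t"                  # stand-in for app_secret
now=$(date +%s)

header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
claims=$(printf '{"aud":"my_app_id","iss":"https://example.auth0.com/","iat":%s,"exp":%s}' \
  "$now" "$((now + 86400))" | b64url)

# The signature is an HMAC-SHA256 over "header.claims" using the shared secret
signature=$(printf '%s.%s' "$header" "$claims" \
  | openssl dgst -sha256 -hmac "$secret" -binary | b64url)

token="$header.$claims.$signature"
echo "$token"
```

<p>Anyone holding the same secret can recompute the signature and check the claims, which is essentially what the with_validation pipeline in verify/0 is doing.</p>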
To see this in action, let’s create a controller that will return a simple JSON response <em>{success: true}.</em></p><pre># web/controllers/status_controller.ex<br>defmodule JwtPhoenix.StatusController do<br>  use JwtPhoenix.Web, :controller<br><br>  def index(conn, _params) do<br>    status = %{<br>      success: true<br>    }<br>    render(conn, &quot;status.json&quot;, status: status)<br>  end<br>end</pre><pre># web/views/status_view.ex<br>defmodule JwtPhoenix.StatusView do<br>  use JwtPhoenix.Web, :view<br><br>  def render(&quot;status.json&quot;, %{status: status}) do<br>    status<br>  end<br>end</pre><pre># web/router.ex ; add the status endpoint<br>scope &quot;/api&quot;, JwtPhoenix do<br>  pipe_through :api<br><br>  get &quot;/status&quot;, StatusController, :index<br>end</pre><p>We are ready to access our <em>/api/status</em> endpoint. For now, we will not dive into how to do this on the frontend side, but with Auth0 Lock it’s fairly easy to set up. Let’s see a sample curl request to see this in action.</p><pre>$ curl -D - http://localhost:4000/api/status<br>HTTP/1.1 401 Unauthorized<br>server: Cowboy<br>date: Mon, 22 Aug 2016 17:00:00 GMT<br>content-length: 36<br>cache-control: max-age=0, private, must-revalidate<br>x-request-id: fco562v0k8eleuhr0oeks0i1f2boigcm<br>content-type: application/json; charset=utf-8<br><br>{&quot;errors&quot;:{&quot;detail&quot;:&quot;unauthorized&quot;}}</pre><p>Since we didn’t specify a valid token (the same happens for invalid tokens), our API now responds with status <strong><em>401</em></strong>. 
For authenticated requests, it responds with the appropriate response and HTTP status code <strong><em>200</em></strong>.</p><pre>$ curl -D - http://localhost:4000/api/status -H &quot;Authorization: Bearer &lt;valid_jwt_token&gt;&quot;<br>HTTP/1.1 200 OK<br>server: Cowboy<br>date: Mon, 22 Aug 2016 17:02:55 GMT<br>content-length: 16<br>content-type: application/json; charset=utf-8<br>cache-control: max-age=0, private, must-revalidate<br>x-request-id: stcjk670bdih5n2phgpm3h5nkgmc2dor<br><br>{&quot;success&quot;:true}</pre><p>Now that we’ve got the basics working, we are going to add an authorization feature. While the work we’ve done so far is good enough to check for a valid token, we don’t have a way to limit particular endpoints to particular roles. For simplicity, we are going to add two functions to router.ex, but you could easily extract them into a separate Plug module.</p><pre># web/router.ex<br>@doc &quot;&quot;&quot;<br>Function that will serve as Plug for verifying metadata<br>&quot;&quot;&quot;<br>def check_admin_metadata(conn, _opts) do<br>  claims = Map.get(conn.assigns, :joken_claims)<br>  case Map.get(claims, &quot;app_metadata&quot;) do<br>    %{&quot;role&quot; =&gt; &quot;admin&quot;} -&gt;<br>      assign(conn, :admin, true)<br>    _ -&gt;<br>      conn<br>      |&gt; forbidden<br>  end<br>end<br><br>@doc &quot;&quot;&quot;<br>send 403 to client<br>&quot;&quot;&quot;<br>def forbidden(conn) do<br>  msg = %{<br>    errors: %{<br>      details: &quot;forbidden resource&quot;<br>    }<br>  }<br>  conn<br>  |&gt; put_resp_content_type(&quot;application/json&quot;)<br>  |&gt; send_resp(403, Poison.encode!(msg))<br>  |&gt; halt<br>end</pre><pre># create a new pipeline for admin: web/router.ex<br>pipeline :api_admin do<br>  plug :check_admin_metadata<br>end</pre><pre># add a new scope<br>scope &quot;/api/admin&quot;, JwtPhoenix do<br>  pipe_through [:api, :api_admin]<br><br>  get &quot;/&quot;, StatusController, :admin<br>end</pre><p>We just wrote a simple 
function to check if the user has the <strong><em>admin </em></strong><em>role</em>. Auth0 provides two fields, <strong><em>app_metadata </em></strong>and <strong><em>user_metadata</em></strong>. The former is editable only by the application, so it can be used for managing roles and access. The latter is user-editable and can be used for things like user preferences. The example above checks that the claims have the right app_metadata; otherwise, it sends a 403 status code. We also put <strong><em>admin: true</em></strong> in conn.assigns so that we can take certain actions later. Finally, we created a new pipeline, <strong><em>:api_admin</em></strong>, which will be used under the <strong><em>/api/admin </em></strong>scope. The beauty of Phoenix is that we can pipe_through a list of pipelines, giving us flexibility in transforming the <strong><em>%Plug.Conn{}</em></strong> struct. Let’s add a function in our <em>StatusController </em>to handle the<em> /api/admin </em>endpoint.</p><pre>def admin(conn, _params) do<br>  status = %{<br>    success: true,<br>    role: &quot;admin&quot;<br>  }<br>  render(conn, &quot;status.json&quot;, status: status)<br>end</pre><p>This is a simplistic example, as a real-world scenario would be a lot more complex, but it serves our purpose of demonstrating the feature. 
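<p>For intuition, the role check boils down to base64url-decoding the middle segment of the token and inspecting app_metadata. A shell approximation follows (the claims are made up; in the app the check happens on the already-decoded claims map in check_admin_metadata):</p>

```shell
#!/bin/sh
set -eu
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

# Claims such as an Auth0-issued token might carry (illustrative values)
claims='{"app_metadata":{"role":"admin"},"aud":"my_app_id"}'
payload=$(printf '%s' "$claims" | b64url)

# Decode the payload: restore base64 padding, undo the url-safe alphabet
pad=$(( (4 - ${#payload} % 4) % 4 ))
padded="$payload"
i=0
while [ "$i" -lt "$pad" ]; do padded="${padded}="; i=$((i + 1)); done
decoded=$(printf '%s' "$padded" | tr '_-' '/+' | openssl base64 -d -A)

# A shell-level version of the role check in check_admin_metadata
case "$decoded" in
  *'"role":"admin"'*) echo "admin" ;;    # prints "admin" for these claims
  *) echo "forbidden" ;;
esac
```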
Let’s see how curl responds to two properly signed tokens, one with the admin role and the other without.</p><pre>$ curl -D - http://localhost:4000/api/admin -H &quot;Authorization: Bearer &lt;valid_token_with_no_role&gt;&quot;<br>HTTP/1.1 403 Forbidden<br>server: Cowboy<br>date: Mon, 22 Aug 2016 17:40:01 GMT<br>content-length: 43<br>cache-control: max-age=0, private, must-revalidate<br>x-request-id: 7imahtg2gnpiodeo20q9ku1udtf3jvl4<br>content-type: application/json; charset=utf-8<br><br>{&quot;errors&quot;:{&quot;details&quot;:&quot;forbidden resource&quot;}}</pre><pre>$ curl -D - http://localhost:4000/api/admin -H &quot;Authorization: Bearer &lt;valid_token_with_admin_role&gt;&quot;<br>HTTP/1.1 200 OK<br>server: Cowboy<br>date: Mon, 22 Aug 2016 17:40:26 GMT<br>content-length: 31<br>content-type: application/json; charset=utf-8<br>cache-control: max-age=0, private, must-revalidate<br>x-request-id: fcemumhu2kakr8e083f2u1n65ntt173h<br><br>{&quot;success&quot;:true,&quot;role&quot;:&quot;admin&quot;}</pre><p>The <strong><em>/api/status </em></strong>endpoint still works for tokens without the admin role, but <strong><em>/api/admin </em></strong>sends a 403 status code.</p><p>This concludes our JWT and Auth0 demonstration. Hopefully you now know how to set up your web app for JWT-based authentication and authorization. Some parts of this article are Auth0-specific, but the flow should generally work with any provider. There’s so much more we could cover, such as JWT invalidation using short-lived expirations, refresh tokens, or a token blacklist, but we’ll leave that for another day!</p><p><em>The sample application we built is hosted on </em><a href="https://github.com/Brightergy/jwt-phoenix-sample"><em>GitHub</em></a><em>. You can check the </em><a href="https://github.com/ueberauth/"><em>Ueberauth</em></a><em> organization for authentication systems such as Guardian and Ueberauth and the various strategies supported by Ueberauth. 
Check out </em><a href="https://github.com/bryanjos/joken"><em>joken</em></a><em> for all your JWT needs.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6c4b51909b19" width="1" height="1" alt=""><hr><p><a href="https://medium.com/brightergy-engineering/jwt-authentication-based-api-in-phoenix-6c4b51909b19">Using JWT and Auth0 for an Elixir/Phoenix web app</a> was originally published in <a href="https://medium.com/brightergy-engineering">Brightergy Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Monitoring Erlang Runtime Statistics]]></title>
            <link>https://medium.com/brightergy-engineering/monitoring-erlang-runtime-statistics-59645e362dc8?source=rss-5144571fc87b------2</link>
            <guid isPermaLink="false">https://medium.com/p/59645e362dc8</guid>
            <category><![CDATA[elixir]]></category>
            <category><![CDATA[phoenix-framework]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[erlang]]></category>
            <category><![CDATA[dashboard]]></category>
            <dc:creator><![CDATA[Samar Acharya]]></dc:creator>
            <pubDate>Mon, 15 Aug 2016 20:52:11 GMT</pubDate>
            <atom:updated>2016-08-15T23:39:12.095Z</atom:updated>
            <content:encoded><![CDATA[<p>So the cloud-based Erlang application you wrote over the weekend is up and running, but all of a sudden it goes down. I can personally attest that it is not a good feeling. You wish you had been able to predict the failure before it happened. In this post, I will show you how we monitor Erlang runtime statistics and send our metrics to <a href="https://www.geckoboard.com/">Geckoboard</a>, the KPI dashboard where we house all of our application statistics in a single place.</p><p>I will use the <a href="http://phoenixframework.org/">Phoenix Framework</a>, a popular web framework written in <a href="https://elixir-lang.org">Elixir</a> that leverages the Erlang VM while providing beautiful syntax and other cool features to work with. However, this can easily be adapted for another Elixir or Erlang framework. Of course, runtime statistics are just one of the many metrics you would want to measure. Over the past week, we have been able to gather and measure various important metrics of our platform, such as response times, response status codes, database metrics, load average, etc.</p><p>First, let’s put <a href="https://hex.pm/packages/ex_erlstats">ExErlstats</a> in the list of dependencies for our application by editing <em>mix.exs</em>:</p><pre>defp deps do<br>  {:ex_erlstats, &quot;~&gt; 0.1&quot;}<br>end</pre><p>Run <em>mix deps.get </em>and we are set to use ExErlstats, which is just a simple wrapper for getting Erlang VM statistics in Elixir.</p><p>Now, let’s create a controller to handle the status check:</p><pre># web/controllers/status_check_controller.ex<br>defmodule MyApp.StatusCheckController do<br> use MyApp.Web, :controller<br>end</pre><p>Let’s add a couple of index functions with pattern matching and configure router.ex so that we can hit the status check endpoint.</p><pre>def index(conn, %{&quot;gecko&quot; =&gt; &quot;true&quot;, &quot;memory_stats&quot; =&gt; &quot;true&quot;}) do<br> msg = ExErlstats.memory<br> 
|&gt; Stream.filter(fn {_k, v} -&gt;<br>   valid_stat?(v)<br> end)<br> |&gt; Enum.map(fn {k, v} -&gt;<br>   %{<br>     &quot;title&quot;: %{<br>     &quot;text&quot;: atom_to_str(k)<br>     },<br>     &quot;description&quot;: mem_normalize(v)<br>   }<br> end)<br> response(conn, msg)<br>end</pre><pre>def index(conn, %{&quot;gecko&quot; =&gt; &quot;true&quot;, &quot;sysinfo&quot; =&gt; &quot;true&quot;}) do<br> msg = ExErlstats.system_info<br> |&gt; Stream.filter(fn {_k, v} -&gt;<br>   valid_stat?(v)<br> end)<br> |&gt; Enum.map(fn {k, v} -&gt;<br>   %{<br>     &quot;title&quot;: %{<br>     &quot;text&quot;: atom_to_str(k)<br>     },<br>     &quot;description&quot;: &quot;#{v}&quot;<br>   }<br> end)<br> response(conn, msg)<br>end</pre><pre>def index(conn, %{&quot;gecko&quot; =&gt; &quot;true&quot;, &quot;erl_stats&quot; =&gt; &quot;true&quot;}) do<br> msg = ExErlstats.stats<br> |&gt; Stream.filter(fn {_k, v} -&gt;<br>   valid_stat?(v)<br> end)<br> |&gt; Enum.map(fn {k, v} -&gt;<br>   %{<br>     &quot;title&quot;: %{<br>     &quot;text&quot;: atom_to_str(k)<br>     },<br>     &quot;description&quot;: &quot;#{v}&quot;<br>   }<br> end)<br> response(conn, msg)<br>end</pre><pre>def index(conn, _params) do<br> time_start = :os.system_time(:milli_seconds)<br> time_end = :os.system_time(:milli_seconds)<br> time_diff = &quot;#{time_end - time_start}ms&quot;<br> success(conn, time_diff)<br>end</pre><pre>defp success(conn, time_diff) do<br> msg = %{<br>   success: true,<br>   msg: &quot;ok&quot;,<br>   time: time_diff<br> }<br> response(conn, msg)<br>end</pre><pre>defp response(conn, msg, status \\ :ok) do<br> conn<br> |&gt; put_status(status)<br> |&gt; put_resp_content_type(&quot;application/json&quot;)<br> |&gt; render(MyApp.StatusCheckView, &quot;index.json&quot;, msg: msg)<br>end</pre><pre>defp mem_normalize(v) when is_integer(v), do: _mem_normalize(v)<br>defp mem_normalize(v), do: v<br>defp _mem_normalize(v) when v &lt; 1024, do: &quot;#{v} bytes&quot;<br>defp 
_mem_normalize(v) when v &lt; 1048576, do: &quot;#{Float.round(v / 1024, 2)} KB&quot;<br>defp _mem_normalize(v) when v &lt; 1073741824, do: &quot;#{Float.round(v / 1048576, 2)} MB&quot;<br>defp _mem_normalize(v), do: &quot;#{Float.round(v / 1073741824, 2)} GB&quot;<br>defp valid_stat?(v), do: is_bitstring(v) || is_integer(v) || is_float(v)<br>defp atom_to_str(v) when is_atom(v), do: v |&gt; Atom.to_string |&gt; String.capitalize |&gt; String.replace(&quot;_&quot;, &quot; &quot;)<br>defp atom_to_str(v), do: v</pre><p>We need to create a simple view:</p><pre># web/views/status_check_view.ex<br>defmodule MyApp.StatusCheckView do<br>  use MyApp.Web, :view<br>  def render(&quot;index.json&quot;, %{msg: msg}) do<br>    msg<br>  end<br>end</pre><p>And the router.ex configuration:</p><pre># web/router.ex<br>scope &quot;/&quot;, MyApp do<br>  get &quot;/status&quot;, StatusCheckController, :index<br>end</pre><p>What we did just now is set up a <em>/status</em> endpoint where we can pass a couple of parameters. 
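<p>The _mem_normalize/1 clauses above are a plain bytes-to-human-readable ladder; the same power-of-two thresholds can be sanity-checked in shell (two-decimal rounding via awk, which formats slightly differently from Elixir’s Float.round/2):</p>

```shell
#!/bin/sh
set -eu

# Mirrors _mem_normalize/1: pick a unit by the same power-of-two thresholds
mem_normalize() {
  v=$1
  if [ "$v" -lt 1024 ]; then
    echo "$v bytes"
  elif [ "$v" -lt 1048576 ]; then
    awk -v v="$v" 'BEGIN { printf "%.2f KB\n", v / 1024 }'
  elif [ "$v" -lt 1073741824 ]; then
    awk -v v="$v" 'BEGIN { printf "%.2f MB\n", v / 1048576 }'
  else
    awk -v v="$v" 'BEGIN { printf "%.2f GB\n", v / 1073741824 }'
  fi
}

mem_normalize 512        # 512 bytes
mem_normalize 2048       # 2.00 KB
mem_normalize 5242880    # 5.00 MB
```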
I used params pattern matching to generate the format Geckoboard’s List widget expects.</p><pre>GET /status?gecko=true&amp;sysinfo=true<br>[<br> {<br>  &quot;title&quot;: {<br>   &quot;text&quot;: &quot;Otp release&quot;<br>  },<br>  &quot;description&quot;: &quot;18&quot;<br> },<br> {<br>  &quot;title&quot;: {<br>   &quot;text&quot;: &quot;Port count&quot;<br>  },<br>  &quot;description&quot;: &quot;47&quot;<br> },<br> {<br>  &quot;title&quot;: {<br>   &quot;text&quot;: &quot;Port limit&quot;<br>  },<br>  &quot;description&quot;: &quot;65536&quot;<br> },<br> {<br>  &quot;title&quot;: {<br>   &quot;text&quot;: &quot;Process count&quot;<br>  },<br>  &quot;description&quot;: &quot;381&quot;<br> },<br> {<br>  &quot;title&quot;: {<br>   &quot;text&quot;: &quot;Process limit&quot;<br>  },<br>  &quot;description&quot;: &quot;262144&quot;<br> },<br> {<br>  &quot;title&quot;: {<br>   &quot;text&quot;: &quot;Schedulers&quot;<br>  },<br>  &quot;description&quot;: &quot;8&quot;<br> },<br> {<br>  &quot;title&quot;: {<br>   &quot;text&quot;: &quot;Schedulers online&quot;<br>  },<br>  &quot;description&quot;: &quot;8&quot;<br> },<br> {<br>  &quot;title&quot;: {<br>   &quot;text&quot;: &quot;Version&quot;<br>  },<br>  &quot;description&quot;: &quot;7.3&quot;<br> }<br>]</pre><p>I’m not going to display the other possible JSON results, but the endpoints to hit would be:</p><pre>GET /status?gecko=true&amp;memory_stats=true<br>GET /status?gecko=true&amp;erl_stats=true</pre><p>These all return a JSON response that <a href="https://developer-custom.geckoboard.com/#list">Geckoboard’s List widget</a> expects. 
You will also notice that the code example for the status controller includes a default status check, which we use as a simple check that our API is up.</p><pre>GET /status<br>{<br> &quot;time&quot;: &quot;0ms&quot;,<br> &quot;success&quot;: true,<br> &quot;msg&quot;: &quot;ok&quot;<br>}</pre><p>And this is what’s displayed on Geckoboard:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/694/1*c38ZA-FQdIeyTc-Tm4RoJA.png" /><figcaption>Erlang VM Statistics on Geckoboard</figcaption></figure><p>While we use Geckoboard, you can easily adapt the above examples to send data to the monitoring dashboard of your choice.</p><p><em>If you are looking to use Geckoboard with Elixir, check out </em><a href="https://github.com/Brightergy/ex_gecko"><em>ExGecko</em></a><em>. ExGecko supports </em><a href="https://developer.geckoboard.com/"><em>Geckoboard’s Dataset</em></a><em> and </em><a href="https://developer-custom.geckoboard.com/#push-overview"><em>Push API</em></a><em>s and also provides built-in adapters for feeding Papertrail, Runscope, Heroku server, db and db backup logs to Geckoboard.</em></p><hr><p><a href="https://medium.com/brightergy-engineering/monitoring-erlang-runtime-statistics-59645e362dc8">Monitoring Erlang Runtime Statistics</a> was originally published in <a href="https://medium.com/brightergy-engineering">Brightergy Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Install and Configure InfluxDB on Amazon Linux]]></title>
            <link>https://medium.com/brightergy-engineering/install-and-configure-influxdb-on-amazon-linux-b0c82b38ba2c?source=rss-5144571fc87b------2</link>
            <guid isPermaLink="false">https://medium.com/p/b0c82b38ba2c</guid>
            <category><![CDATA[ec2]]></category>
            <category><![CDATA[elix]]></category>
            <category><![CDATA[influxdb]]></category>
            <category><![CDATA[amazon-web-services]]></category>
            <dc:creator><![CDATA[Samar Acharya]]></dc:creator>
            <pubDate>Thu, 28 Jul 2016 14:48:48 GMT</pubDate>
            <atom:updated>2016-07-28T14:48:48.784Z</atom:updated>
            <content:encoded><![CDATA[<p>TL;DR You can just follow this <a href="https://gist.github.com/techgaun/c2802fb94ba36852690b74ea175faf09">gist</a>, adapting it to your use case.</p><p>We recently started using <a href="https://influxdata.com/">InfluxDB</a> as our time-series database for our IoT platform. InfluxDB allows us to store large amounts of time-series data collected from various types of devices such as power meters and solar generation monitors. We <a href="https://medium.com/@brighterlink/engine-humming-new-milestone-names-53c25ec0b36a#.4p10zobxu">reviewed</a> a couple of time-series databases such as Prometheus, OpenTSDB and Graphite and finally settled on InfluxDB as it best suited our use case. While InfluxDB is still light on features and has certain shortcomings, we see it as a promising young time-series database. We had tried InfluxCloud, but we ran into issues such as lack of control over and visibility into the InfluxDB cluster servers. Thus, we decided to run our InfluxDB in house, and it turned out to be a straightforward process.</p><p>In this post, we will see how to install and configure a single-node InfluxDB service on AWS with an optimal configuration. For our setup, we will use two volumes, one for the WAL (write-ahead log) and one for the InfluxDB data. The general idea is to use a higher-IOPS, smaller volume for the WAL and a larger but slower-IOPS volume for the data.</p><h4>Pre-install</h4><p>From your AWS console or the AWS CLI, create a new security group called <strong><em>influxdb-sg</em></strong> and allow inbound connections to TCP port <strong><em>8086</em></strong> from the appropriate source. Launch an EC2 instance using the <strong><em>Amazon Linux AMI</em></strong> and your desired instance size with the security group we created earlier. 
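One small shell note before the setup commands that follow: the directory layout under /opt/influx is created in a single command using brace expansion, where the shell expands {wal,data,ssl} into three separate paths before mkdir ever runs. Here is a sandboxed illustration of the same idea, using a throwaway /tmp path so it is safe to run anywhere (brace expansion is a bash/zsh feature, not plain POSIX sh):

```shell
# Brace expansion: the shell rewrites this into
#   mkdir -p /tmp/influx-demo/wal /tmp/influx-demo/data /tmp/influx-demo/ssl
# before mkdir ever runs, so one command creates all three directories.
mkdir -p /tmp/influx-demo/{wal,data,ssl}

# List what was created
ls /tmp/influx-demo
```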
Since we’re going to store the data and the WAL on separate volumes, you can either add these volumes as part of your launch process or attach them later. Or you can skip the extra volumes entirely, depending on your needs.</p><p>Once the instance is up, create an SSH session and run the following commands:</p><pre>yum update<br>mkdir -p /opt/influx/{wal,data,ssl}</pre><h4><strong>WAL and Data Volumes Configuration</strong></h4><p>Make sure your device paths are correct. If you are not using additional volumes, you can skip this step.</p><pre>mkfs.ext4 /dev/sdb<br>mkfs.ext4 /dev/sdc<br>mount /dev/sdb /opt/influx/wal/<br>mount /dev/sdc /opt/influx/data/</pre><h4>Installation</h4><p>At the time I wrote this, the latest stable version of <strong>InfluxDB</strong> was <strong>0.13</strong>, which we are going to use.</p><pre>wget https://dl.influxdata.com/influxdb/releases/influxdb-0.13.0.x86_64.rpm<br>yum localinstall influxdb-0.13.0.x86_64.rpm<br>chkconfig influxdb on</pre><h4>Configuration</h4><p><strong>SSL</strong></p><p>It’s recommended to use SSL with InfluxDB whenever you can. InfluxDB requires you to concatenate your private key and your certificate bundle into a single file.</p><pre>cat privkey.pem mysite.crt mysite-ca-bundle.crt &gt; bundle.pem<br>scp bundle.pem root@myhost:/tmp</pre><p>Now, on the influx host, you can update the InfluxDB configuration. We will also update the meta, data and WAL locations as shown below.</p><pre>mv /tmp/bundle.pem /opt/influx/ssl/<br>chown -R influxdb:influxdb /opt/influx/<br>cp /etc/influxdb/influxdb.conf{,-bak}<br>sed -i s./var/lib/influxdb/meta./opt/influx/data/meta. /etc/influxdb/influxdb.conf<br>sed -i s./var/lib/influxdb/data./opt/influx/data/data. /etc/influxdb/influxdb.conf<br>sed -i s./var/lib/influxdb/wal./opt/influx/wal. /etc/influxdb/influxdb.conf<br>sed -i s,/etc/ssl/influxdb.pem,/opt/influx/ssl/bundle.pem, /etc/influxdb/influxdb.conf<br>sed -i &quot;s/https-enabled = false/https-enabled = true/&quot; /etc/influxdb/influxdb.conf</pre><h4>Authentication</h4><p>Now that we have InfluxDB ready to run, we want to enable some form of authentication. Since we’re going to move data in and out over HTTP (the default mechanism in InfluxDB), we should not leave the endpoint open. Users must exist before authentication can be enabled, so we first connect to the InfluxDB instance while authentication is still disabled (it is turned off by default).</p><pre>influx<br>&gt; create user superadmin with password &#39;my_password&#39; with all privileges<br>&gt; create user nonadmin with password &#39;na_password&#39;<br>&gt; grant all on tsdb_stage to nonadmin<br>&gt; grant READ on tsdb_prod to nonadmin<br>&gt; grant WRITE on tsdb_dev to nonadmin</pre><p>We just created two users, an admin user and a non-admin user. Now we can enable authentication, and we should be ready to go with our newly set up InfluxDB instance.</p><pre>/etc/init.d/influxdb stop<br>sed -i &quot;s/auth-enabled = false/auth-enabled = true/&quot; /etc/influxdb/influxdb.conf<br>/etc/init.d/influxdb start</pre><p>The single-node InfluxDB instance is ready now. If you are using Elixir, you can use the nice <a href="https://github.com/mneudert/instream">Instream</a> package to interact with InfluxDB. 
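As an aside, the sed one-liners earlier lean on the fact that sed’s s command accepts almost any character as its delimiter (the commands above use both . and ,), which avoids having to escape every slash in the paths. A minimal, sandboxed illustration of the same trick against a stand-in file (not the real influxdb.conf):

```shell
# A throwaway file that mimics the stock influxdb.conf paths
cat > /tmp/influxdb-demo.conf <<'EOF'
wal-dir = "/var/lib/influxdb/wal"
https-enabled = false
EOF

# ',' as the s-command delimiter: the slashes in the paths need no escaping
sed -i 's,/var/lib/influxdb/wal,/opt/influx/wal,' /tmp/influxdb-demo.conf

# the usual '/' delimiter is fine when the pattern contains no slashes
sed -i 's/https-enabled = false/https-enabled = true/' /tmp/influxdb-demo.conf

cat /tmp/influxdb-demo.conf
```

One caveat on the dot-delimiter form used above: with . as the delimiter, the pattern itself cannot contain a literal dot, so a delimiter like , or | is the safer habit when paths may contain dots.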
If you need help porting data from one InfluxDB instance to another, you may find <a href="https://github.com/Brightergy/influx-copy">influx-copy</a> helpful.</p><hr><p><a href="https://medium.com/brightergy-engineering/install-and-configure-influxdb-on-amazon-linux-b0c82b38ba2c">Install and Configure InfluxDB on Amazon Linux</a> was originally published in <a href="https://medium.com/brightergy-engineering">Brightergy Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Writing a CLI app in Elixir]]></title>
            <link>https://medium.com/brightergy-engineering/writing-a-cli-app-in-elixir-b308291e9f85?source=rss-5144571fc87b------2</link>
            <guid isPermaLink="false">https://medium.com/p/b308291e9f85</guid>
            <category><![CDATA[elixir]]></category>
            <category><![CDATA[erlang]]></category>
            <category><![CDATA[influxdb]]></category>
            <category><![CDATA[command-line]]></category>
            <category><![CDATA[phoenix]]></category>
            <dc:creator><![CDATA[Samar Acharya]]></dc:creator>
            <pubDate>Tue, 26 Jul 2016 22:39:00 GMT</pubDate>
            <atom:updated>2016-07-26T22:39:00.038Z</atom:updated>
            <content:encoded><![CDATA[<p>The journey with Elixir has been pleasant so far: with the concurrency power of Erlang (which we’re yet to leverage to the fullest) and sweet Elixir features like pattern matching, guards and extensibility, we are very happy with the choice. So far we had mostly been writing Phoenix-based frontend and backend APIs and a couple of client SDKs, and recently I got an opportunity to write a command line tool for copying data from one InfluxDB database to another. While I could have written it in a variety of other languages, this was a perfect opportunity to write my first command line application in Elixir.</p><p>Erlang supports short-running programs via the <a href="http://erlang.org/doc/man/escript.html">escript</a> tool. All you have to do is write your application the way Erlang scripting support expects. Also, note that you only need Erlang installed to run an escript, even one written in Elixir, because Elixir itself is embedded in the escript. We’ll dive into writing a CLI tool soon, but let’s first dissect the executable created by escript when you run <strong><em>mix escript.build</em></strong>.</p><pre>head -n3 influx_copy <br>#! /usr/bin/env escript<br>%% <br>%%! -escript main influx_copy_escript</pre><p>As you can see above, the executable created is simple and has a shebang line just like bash or Python scripts. The second line consists of an optional directive to the Emacs editor, which causes Emacs to enter the major mode for editing Erlang source files. Escripts built via Elixir do not contain that directive, as seen above. The third line (or second, if the directive line is not present) specifies any arguments to be passed to the emulator. In the output above, we are overriding the default behavior of escript, which is to invoke the <em>main/1</em> function in the module with the same name as the basename of the escript. Don’t worry! 
This is all handled by Elixir, so all you have to specify is your <em>main_module</em> in your <em>mix.exs </em>file.</p><p>After the headers, the rest of the escript can contain either plain Erlang code or precompiled BEAM code, which is how it works with Elixir. If you look at the output executable, it consists of all the BEAM code in a single file. And since all the necessary dependencies are compiled and included, we don’t need Elixir to be installed to run escripts written in Elixir.</p><p>Now, let’s quickly dive into the process of writing an escript in Elixir. You can check out the <a href="https://github.com/Brightergy/influx-copy/commits/master">influx-copy commits</a>, where I’ve tried to code in small incremental steps.</p><pre>mix new influx-copy --module InfluxCopy --app influx_copy<br>cd influx-copy<br>mix test</pre><p>The <em>influx-copy</em> tool is a simple, generic tool that reads from one InfluxDB source and writes to another destination, with the ability to select fields and transform the values of particular tags. 
The use case was that we had to copy some data from staging to production for one of our real-time monitors, but the tag values were different across environments.</p><p>The Erlang escript expects the main module to have a <em>main/1</em> function, which is what is invoked when the escript is run.</p><p>Update your <a href="https://github.com/Brightergy/influx-copy/blob/d8b79e4ba346f36214d8e8a7b12f06190c256714/mix.exs"><em>mix.exs</em></a> file to contain the following information:</p><pre>def project do<br>  [app: :influx_copy,<br>   version: &quot;0.1.0&quot;,<br>   elixir: &quot;~&gt; 1.3&quot;,<br>   escript: escript,<br>   build_embedded: Mix.env == :prod,<br>   start_permanent: Mix.env == :prod,<br>   deps: deps()]<br>end<br>def escript, do: [main_module: InfluxCopy]</pre><p>As you can see above, whatever main_module you specify is supposed to have the <em>main/1</em> function defined.</p><pre>defmodule InfluxCopy do<br>  def main(args \\ []) do<br>    {opts, _, _} = OptionParser.parse(args,<br>      switches: [start: :integer, end: :integer, src: :string, dest: :string, update_tags: :string],<br>      aliases: [s: :start, e: :end, S: :src, d: :dest, u: :update_tags]<br>      )<br>    IO.puts &quot;Not implemented yet&quot;<br>  end<br>end</pre><p>Now your simple CLI app is ready to be compiled and used, although it does not do anything yet.</p><pre>$ mix escript.build<br>Generated influx_copy app<br>Generated escript influx_copy with MIX_ENV=dev</pre><pre>$ ./influx_copy<br>Not implemented yet</pre><p>Elixir has a nice module called <a href="http://elixir-lang.org/docs/stable/elixir/OptionParser.html">OptionParser</a> that provides helpful functions to parse command line arguments, which is quite useful while writing command line applications in Elixir. 
The influx-copy script does not yet include a helpful help message, but it should be fairly simple to add.</p><p>I’d imagine adding an extra <em>help</em> switch and then, based on the truthiness of that argument, showing a help message instead.</p><p>If you are interested in the script I wrote and how I proceeded further, you can check out <a href="https://github.com/Brightergy/influx-copy">influx-copy</a>.</p><hr><p><a href="https://medium.com/brightergy-engineering/writing-a-cli-app-in-elixir-b308291e9f85">Writing a CLI app in Elixir</a> was originally published in <a href="https://medium.com/brightergy-engineering">Brightergy Engineering</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>