{"id":21936,"date":"2021-09-30T18:51:00","date_gmt":"2021-09-30T18:51:00","guid":{"rendered":"https:\/\/www.askpython.com\/?p=21936"},"modified":"2021-10-01T18:51:31","modified_gmt":"2021-10-01T18:51:31","slug":"opencv-credit-card-reader","status":"publish","type":"post","link":"https:\/\/www.askpython.com\/python\/examples\/opencv-credit-card-reader","title":{"rendered":"Credit Card Reader in Python using OpenCV"},"content":{"rendered":"\n<p>The purpose of this tutorial is to help you build a credit card reader with OpenCV and machine learning techniques to identify the card number and the card type.<\/p>\n\n\n\n<p>Let us get started!<\/p>\n\n\n\n<p><strong><em>Also read: <a href=\"https:\/\/www.askpython.com\/python-modules\/read-images-in-python-opencv\" data-type=\"post\" data-id=\"8838\">How to read Images in Python using OpenCV?<\/a><\/em><\/strong><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Introduction to OCR<\/h2>\n\n\n\n<p>We have always seen <strong>Optical character recognition<\/strong> being used a lot in machine learning and deep learning. One of many such applications includes the identification and reading of credit cards and the card number.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"455\" height=\"329\" src=\"https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/Credit_Card_Reader_Demonstration.jpg\" alt=\"Credit Card Reader Demonstration\" class=\"wp-image-21939\" srcset=\"https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/Credit_Card_Reader_Demonstration.jpg 455w, https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/Credit_Card_Reader_Demonstration-300x217.jpg 300w\" sizes=\"auto, (max-width: 455px) 100vw, 455px\" \/><figcaption>Credit Card Reader Demonstration<\/figcaption><\/figure><\/div>\n\n\n\n<p>The question that might come to your mind is Why? 
Such an application could be of great help to banks and other financial institutions, allowing them to recognize card numbers and card types digitally.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Implementation of a Credit Card Reader in Python<\/h2>\n\n\n\n<p>Now that we have understood the concept, we know what we are going to build by the end of this tutorial.<\/p>\n\n\n\n<p>Let&#8217;s start building the project step by step.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 1: Importing Modules<\/h3>\n\n\n\n<p>We&#8217;ll be working with <a href=\"https:\/\/www.askpython.com\/python\/numpy-trigonometric-functions\" data-type=\"post\" data-id=\"14347\">numpy<\/a> and <a href=\"https:\/\/www.askpython.com\/python-modules\/matplotlib\/python-matplotlib\" data-type=\"post\" data-id=\"3182\">matplotlib<\/a> along with the OpenCV and imutils modules. <\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\nimport cv2\nimport imutils\nimport argparse\nimport numpy as np\nfrom imutils import contours\nfrom matplotlib import pyplot as plt\n<\/pre><\/div>\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 2: Assigning the Card Type<\/h3>\n\n\n\n<p>The card type is assigned according to the first digit of the card number. 
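<\/p>\n\n\n\n<p>For instance, once the mapping below is defined, the issuer can be looked up from the first character of the recognized number (the sample card number here is a hypothetical test value):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\ncard = &quot;4012888888881881&quot;  # hypothetical test number\nprint(FIRST_NUMBER.get(card&#x5B;0], &quot;Unknown&quot;))  # Visa\n<\/pre><\/div>\n\n\n<p>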
The same is displayed below.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\nFIRST_NUMBER = {\n    &quot;3&quot;: &quot;American Express&quot;,\n    &quot;4&quot;: &quot;Visa&quot;,\n    &quot;5&quot;: &quot;MasterCard&quot;,\n    &quot;6&quot;: &quot;Discover Card&quot;}\n<\/pre><\/div>\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 3: Loading and Pre-processing the Reference Image<\/h3>\n\n\n\n<p>In order to read the reference OCR image, we make use of the <a href=\"https:\/\/www.askpython.com\/python-modules\/python-imread-opencv\"><code>imread<\/code><\/a> function. The reference image contains the digits 0-9 in the OCR-A font, which will be used for template matching later in the pipeline.<\/p>\n\n\n\n<p>Pre-processing includes converting the image to grayscale, then thresholding and inverting it to obtain a binary inverted image.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\nref = cv2.imread(&#039;ocr_a_reference.png&#039;)\nref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)\nref = cv2.threshold(ref, 10, 255, cv2.THRESH_BINARY_INV)&#x5B;1]\n<\/pre><\/div>\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 4: Detecting Contours<\/h3>\n\n\n\n<p>In this step, we find the contours present in the pre-processed image and store the returned contour information. 
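<\/p>\n\n\n\n<p>A quick note on <code>imutils.grab_contours<\/code>: <code>cv2.findContours<\/code> returns a tuple of different lengths across OpenCV versions (image, contours, hierarchy in 3.x versus contours, hierarchy in 4.x), and grab_contours simply picks out the contour list. A rough sketch of the idea (the helper name here is hypothetical):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\ndef pick_contours(ret):\n    # OpenCV 4.x returns (contours, hierarchy); 3.x returns (image, contours, hierarchy)\n    return ret&#x5B;0] if len(ret) == 2 else ret&#x5B;1]\n<\/pre><\/div>\n\n\n<p>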
Next, we sort the contours from left-to-right and initialize a dictionary, digits, which maps each digit name to its region of interest.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\nrefCnts = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\nrefCnts = imutils.grab_contours(refCnts)\nrefCnts = contours.sort_contours(refCnts, method=&quot;left-to-right&quot;)&#x5B;0]\ndigits = {}\n<\/pre><\/div>\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 5: Creating Bounding Boxes around Digits<\/h3>\n\n\n\n<p>In this step, we loop through the reference contours obtained in the previous step, where each value holds a digit along with its contour information. We compute a bounding box for each contour and store the (x, y) coordinates along with the width and height of the computed box. Each extracted digit is then resized to a fixed 57x88 size so that it can be compared consistently during template matching.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\nfor (i, c) in enumerate(refCnts):\n    (x, y, w, h) = cv2.boundingRect(c)\n    roi = ref&#x5B;y:y + h, x:x + w]\n    roi = cv2.resize(roi, (57, 88))\n    digits&#x5B;i] = roi\n\nrectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))\nsqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))\n<\/pre><\/div>\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 6: Loading and Pre-processing the Credit Card Image<\/h3>\n\n\n\n<p>In this step, we load the photo of the credit card and then <code>resize<\/code> it to a width of 300 pixels while preserving the aspect ratio.<\/p>\n\n\n\n<p>This step is followed by converting the image to <code>grayscale<\/code>. 
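<\/p>\n\n\n\n<p>Note that <code>imutils.resize<\/code> only needs the target width and scales the height proportionally, which is what preserves the aspect ratio. A minimal sketch of the equivalent computation (the helper name here is hypothetical):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\ndef scaled_dims(w, h, new_w=300):\n    # height scales in proportion to the new width\n    return (new_w, int(round(h * new_w \/ w)))\n\nprint(scaled_dims(600, 300))  # (300, 150)\n<\/pre><\/div>\n\n\n<p>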
After this, we perform a top-hat <code>morphological operation<\/code> on the grayscale image, which highlights light regions (the digits) against a dark background.<\/p>\n\n\n\n<p>The next step is to compute a <strong><code>Scharr gradient<\/code><\/strong> and store the result as <code>gradX<\/code>. Then we take the absolute value of the stored gradX array, since we aim to scale all the values into the range <strong><code>0-255<\/code><\/strong>.<\/p>\n\n\n\n<p>This normalization takes place by computing the minimum and maximum values of gradX and applying <strong><code>min-max normalization<\/code><\/strong>: each value is mapped to 255 * (value - min) \/ (max - min).<\/p>\n\n\n\n<p>Finally, we find the <code>contours<\/code>, store them in a list, and initialize a list to hold the digit group locations. We then loop through the contours the same way we did for the reference image in <code>step 5<\/code>, keeping only the bounding boxes whose aspect ratio and size match a group of four digits.<\/p>\n\n\n\n<p>Next, we\u2019ll <strong>sort the groupings<\/strong> from <code>left to right<\/code> and initialize a list for the credit card digits.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\nimage = cv2.imread(&#039;credit_card_03.png&#039;)\nimage = imutils.resize(image, width=300)\ngray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\ntophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, rectKernel)\n\ngradX = cv2.Sobel(tophat, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=-1)\ngradX = np.absolute(gradX)\n(minVal, maxVal) = (np.min(gradX), np.max(gradX))\ngradX = (255 * ((gradX - minVal) \/ (maxVal - minVal)))\ngradX = gradX.astype(&quot;uint8&quot;)\n\ngradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)\nthresh = cv2.threshold(gradX, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)&#x5B;1]\n\nthresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)\n\ncnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,\n\tcv2.CHAIN_APPROX_SIMPLE)\ncnts = imutils.grab_contours(cnts)\nlocs = &#x5B;]\n\nfor (i, c) in enumerate(cnts):\n\t(x, y, w, h) = cv2.boundingRect(c)\n\tar = w \/ 
float(h)\n\tif ar &gt; 2.5 and ar &lt; 4.0:\n\t\tif (w &gt; 40 and w &lt; 55) and (h &gt; 10 and h &lt; 20):\n\t\t\tlocs.append((x, y, w, h))\n\nlocs = sorted(locs, key=lambda x:x&#x5B;0])\noutput = &#x5B;]\n<\/pre><\/div>\n\n\n<p>Now that we know where each group of four digits is, let\u2019s loop through the four sorted groupings and determine the digits therein. The looping includes <strong>thresholding, detecting contours, and template matching<\/strong> as well.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\nfor (i, (gX, gY, gW, gH)) in enumerate(locs):\n    groupOutput = &#x5B;]\n    group = gray&#x5B;gY - 5:gY + gH + 5, gX - 5:gX + gW + 5]\n    group = cv2.threshold(group, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)&#x5B;1]\n    digitCnts = cv2.findContours(group.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n    digitCnts = imutils.grab_contours(digitCnts)\n    digitCnts = contours.sort_contours(digitCnts, method=&quot;left-to-right&quot;)&#x5B;0]\n    for c in digitCnts:\n        (x, y, w, h) = cv2.boundingRect(c)\n        roi = group&#x5B;y:y + h, x:x + w]\n        roi = cv2.resize(roi, (57, 88))\n        scores = &#x5B;]\n        for (digit, digitROI) in digits.items():\n            result = cv2.matchTemplate(roi, digitROI, cv2.TM_CCOEFF)\n            (_, score, _, _) = cv2.minMaxLoc(result)  \n            scores.append(score)\n        groupOutput.append(str(np.argmax(scores)))\n    cv2.rectangle(image, (gX - 5, gY - 5),\n        (gX + gW + 5, gY + gH + 5), (0, 0, 255), 2)\n    cv2.putText(image, &quot;&quot;.join(groupOutput), (gX, gY - 15),\n        cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 2)\n    output.extend(groupOutput)\n<\/pre><\/div>\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 7: Displaying the Final Results<\/h3>\n\n\n\n<p>The code below will display the final Card Type, Card Number, and the OCR applied 
image.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\nprint(&quot;Credit Card Type: {}&quot;.format(FIRST_NUMBER&#x5B;output&#x5B;0]]))\nprint(&quot;Credit Card #: {}&quot;.format(&quot;&quot;.join(output)))\n\nplt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))\nplt.title(&#039;Image&#039;); plt.show()\n<\/pre><\/div>\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">The Final Code<\/h2>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; gutter: true; title: ; notranslate\" title=\"\">\nimport cv2\nimport imutils\nimport argparse\nimport numpy as np\nfrom imutils import contours\nfrom matplotlib import pyplot as plt\n\nFIRST_NUMBER = {\n    &quot;3&quot;: &quot;American Express&quot;,\n    &quot;4&quot;: &quot;Visa&quot;,\n    &quot;5&quot;: &quot;MasterCard&quot;,\n    &quot;6&quot;: &quot;Discover Card&quot;}\n\nref = cv2.imread(&#039;ocr_a_reference.png&#039;)\nref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)\nref = cv2.threshold(ref, 10, 255, cv2.THRESH_BINARY_INV)&#x5B;1]\n\nrefCnts = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)\nrefCnts = imutils.grab_contours(refCnts)\nrefCnts = contours.sort_contours(refCnts, method=&quot;left-to-right&quot;)&#x5B;0]\ndigits = {}\n\nfor (i, c) in enumerate(refCnts):\n    (x, y, w, h) = cv2.boundingRect(c)\n    roi = ref&#x5B;y:y + h, x:x + w]\n    roi = cv2.resize(roi, (57, 88))\n    digits&#x5B;i] = roi\n    \nrectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))\nsqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))\n\nimage = cv2.imread(&#039;credit_card_03.png&#039;)\nimage = imutils.resize(image, width=300)\ngray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\ntophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, rectKernel)\n\ngradX = cv2.Sobel(tophat, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=-1)\ngradX = np.absolute(gradX)\n(minVal, maxVal) = 
(np.min(gradX), np.max(gradX))\ngradX = (255 * ((gradX - minVal) \/ (maxVal - minVal)))\ngradX = gradX.astype(&quot;uint8&quot;)\n\ngradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)\nthresh = cv2.threshold(gradX, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)&#x5B;1]\n\nthresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)\n\ncnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,\n\tcv2.CHAIN_APPROX_SIMPLE)\ncnts = imutils.grab_contours(cnts)\nlocs = &#x5B;]\n\nfor (i, c) in enumerate(cnts):\n\t(x, y, w, h) = cv2.boundingRect(c)\n\tar = w \/ float(h)\n\tif ar &gt; 2.5 and ar &lt; 4.0:\n\t\tif (w &gt; 40 and w &lt; 55) and (h &gt; 10 and h &lt; 20):\n\t\t\tlocs.append((x, y, w, h))\n\nlocs = sorted(locs, key=lambda x:x&#x5B;0])\noutput = &#x5B;]\n\nfor (i, (gX, gY, gW, gH)) in enumerate(locs):\n    groupOutput = &#x5B;]\n    group = gray&#x5B;gY - 5:gY + gH + 5, gX - 5:gX + gW + 5]\n    group = cv2.threshold(group, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)&#x5B;1]\n    digitCnts = cv2.findContours(group.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n    digitCnts = imutils.grab_contours(digitCnts)\n    digitCnts = contours.sort_contours(digitCnts, method=&quot;left-to-right&quot;)&#x5B;0]\n    for c in digitCnts:\n        (x, y, w, h) = cv2.boundingRect(c)\n        roi = group&#x5B;y:y + h, x:x + w]\n        roi = cv2.resize(roi, (57, 88))\n        scores = &#x5B;]\n        for (digit, digitROI) in digits.items():\n            result = cv2.matchTemplate(roi, digitROI, cv2.TM_CCOEFF)\n            (_, score, _, _) = cv2.minMaxLoc(result)  \n            scores.append(score)\n        groupOutput.append(str(np.argmax(scores)))\n    cv2.rectangle(image, (gX - 5, gY - 5),\n        (gX + gW + 5, gY + gH + 5), (0, 0, 255), 2)\n    cv2.putText(image, &quot;&quot;.join(groupOutput), (gX, gY - 15),\n        cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 2)\n    output.extend(groupOutput)\n    \nprint(&quot;Credit Card Type: 
{}&quot;.format(FIRST_NUMBER&#x5B;output&#x5B;0]]))\nprint(&quot;Credit Card #: {}&quot;.format(&quot;&quot;.join(output)))\n\nplt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))\nplt.title(&#039;Image&#039;); plt.show()\n<\/pre><\/div>\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Some Sample Outputs<\/h2>\n\n\n\n<p>Now let&#8217;s look at some sample outputs after implementing the code mentioned above on various credit card images.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"375\" height=\"258\" src=\"https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/credit_card_Reader_Output_01.png\" alt=\"Credit Card Reader Output 01\" class=\"wp-image-21952\" srcset=\"https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/credit_card_Reader_Output_01.png 375w, https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/credit_card_Reader_Output_01-300x206.png 300w\" sizes=\"auto, (max-width: 375px) 100vw, 375px\" \/><figcaption>Credit Card Reader Output 01<\/figcaption><\/figure><\/div>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"375\" height=\"257\" src=\"https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/credit_card_Reader_Output_02.png\" alt=\"Credit Card Reader Output 02\" class=\"wp-image-21953\" srcset=\"https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/credit_card_Reader_Output_02.png 375w, https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/credit_card_Reader_Output_02-300x206.png 300w\" sizes=\"auto, (max-width: 375px) 100vw, 375px\" \/><figcaption>Credit Card Reader Output 02<\/figcaption><\/figure><\/div>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"375\" height=\"258\" 
src=\"https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/credit_card_Reader_Output_03.png\" alt=\"Credit Card Reader Output 03\" class=\"wp-image-21954\" srcset=\"https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/credit_card_Reader_Output_03.png 375w, https:\/\/www.askpython.com\/wp-content\/uploads\/2021\/09\/credit_card_Reader_Output_03-300x206.png 300w\" sizes=\"auto, (max-width: 375px) 100vw, 375px\" \/><figcaption>Credit Card Reader Output 03<\/figcaption><\/figure><\/div>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>I hope you understood the concept and enjoyed the outputs. Try the same approach with more images and see how well the results hold up.<\/p>\n\n\n\n<p>Happy Coding! &#x1f607;<\/p>\n\n\n\n<p>Want to learn more? Check out the tutorials mentioned below:<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li><a href=\"https:\/\/www.askpython.com\/python\/examples\/python-detecting-contours\">Python: Detecting Contours<\/a><\/li><li><a href=\"https:\/\/www.askpython.com\/python\/examples\/edge-detection-in-images\">Edge Detection in Images using Python<\/a><\/li><li><a href=\"https:\/\/www.askpython.com\/python\/examples\/image-processing-in-python\">Image Processing in Python \u2013 Edge Detection, Resizing, Erosion, and Dilation<\/a><\/li><\/ol>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n","protected":false},"excerpt":{"rendered":"<p>The purpose of this tutorial is to help you build a credit card reader with OpenCV and machine learning techniques to identify the card number and the card type. Let us get started! Also read: How to read Images in Python using OpenCV? 
Introduction to OCR We have always seen Optical character recognition being used [&hellip;]<\/p>\n","protected":false},"author":28,"featured_media":21938,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[],"class_list":["post-21936","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-examples"],"blocksy_meta":[],"_links":{"self":[{"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/posts\/21936","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/users\/28"}],"replies":[{"embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/comments?post=21936"}],"version-history":[{"count":0,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/posts\/21936\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/media\/21938"}],"wp:attachment":[{"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/media?parent=21936"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/categories?post=21936"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/tags?post=21936"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}