{"id":19099,"date":"2026-04-19T20:55:14","date_gmt":"2026-04-19T20:55:14","guid":{"rendered":"https:\/\/www.askpython.com\/?p=19099"},"modified":"2026-04-20T09:14:44","modified_gmt":"2026-04-20T09:14:44","slug":"color-detection","status":"publish","type":"post","link":"https:\/\/www.askpython.com\/python\/examples\/color-detection","title":{"rendered":"Color Detection using Python &#8211; Beginner&#8217;s Reference"},"content":{"rendered":"\n<p>Color detection is one of the most approachable problems in computer vision. The idea is simple: look at any pixel in an image and figure out what color it represents. Yet doing this reliably across different lighting conditions, camera types, and backgrounds requires more than just comparing RGB numbers. This guide walks through building a color detector in Python with OpenCV, covering both the RGB distance method and the HSV range method, with working code for still images, webcam feeds, and an interactive click-to-detect tool.<\/p>\n\n\n\n<p>By the end you have a complete, working color detection pipeline you can adapt for image segmentation, object filtering, visual inspection, and interactive applications.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Computers Represent Colors: The RGB Model<\/h2>\n\n\n\n<p>Every color on a screen is built from red, green, and blue channels. In OpenCV, each channel gets a value between 0 and 255. Pure red is <code>(255, 0, 0)<\/code>, pure green is <code>(0, 255, 0)<\/code>, and cyan is <code>(0, 255, 255)<\/code>. When you blend all three channels at full intensity you get white; when all are zero you get black.<\/p>\n\n\n\n<p>The challenge with RGB-based color detection is that the same physical color produces very different RGB values under a bright lamp versus soft daylight versus a phone flash. 
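<\/p>

<p>The additive mixing described above can be sketched directly with NumPy. This is a toy illustration with made-up arrays, not part of the detection pipeline:<\/p>

```python
import numpy as np

# Primary colors as (R, G, B) triples, one byte per channel
red   = np.array([255, 0, 0], dtype=np.uint8)
green = np.array([0, 255, 0], dtype=np.uint8)
blue  = np.array([0, 0, 255], dtype=np.uint8)

# Blending all three channels at full intensity saturates to white
white = np.clip(red.astype(int) + green.astype(int) + blue.astype(int), 0, 255)
print(white.tolist())   # [255, 255, 255]

# Red plus green at full intensity gives yellow
yellow = np.clip(red.astype(int) + green.astype(int), 0, 255)
print(yellow.tolist())  # [255, 255, 0]
```

<p>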
The hue component of HSV is more stable across lighting conditions, which is why HSV is the preferred space for most color detection tasks.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Setting Up the Environment<\/h2>\n\n\n\n<p>Install OpenCV and NumPy. Pandas is optional but useful if you want to map detected colors to human-readable names.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\npip install opencv-python numpy pandas\n\n<\/pre><\/div>\n\n\n<p>The imports you use throughout this guide:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\nimport cv2\nimport numpy as np\nimport pandas as pd\n\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\">Loading and Displaying an Image<\/h2>\n\n\n\n<p>OpenCV reads images in BGR format by default, which trips up most newcomers because standard image processing libraries and Matplotlib expect RGB. Keep this in mind when you convert or display.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\nimage = cv2.imread(&quot;colorful_scene.jpg&quot;)\nheight, width, channels = image.shape\nprint(f&quot;Image loaded: {width}x{height} pixels, {channels} channels&quot;)\n\n<\/pre><\/div>\n\n\n<p>To display the image with Matplotlib in the correct colors, convert BGR to RGB first:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\nrgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\">Detecting Colors with RGB Distance<\/h2>\n\n\n\n<p>The most straightforward way to detect a color is to measure how close each pixel is to a target RGB value. The Euclidean distance in RGB space gives you a score: lower means a closer match. 
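<\/p>

<p>The scoring step can be sketched on its own. This is a minimal version with hypothetical pixel values, cast to float so uint8 arithmetic cannot wrap around:<\/p>

```python
import numpy as np

def rgb_distance(pixel, target):
    """Euclidean distance between two (R, G, B) triples."""
    p = np.asarray(pixel, dtype=float)
    t = np.asarray(target, dtype=float)
    return float(np.sqrt(((p - t) ** 2).sum()))

# A slightly off-red pixel scores close to pure red...
print(rgb_distance((250, 10, 5), (255, 0, 0)))   # ~12.25
# ...while pure blue scores far away
print(rgb_distance((0, 0, 255), (255, 0, 0)))    # ~360.62
```

<p>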
You then threshold that distance to decide which pixels count as the color.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\ndef detect_color_by_rgb(image, target_rgb, threshold=50):\n    output = image.copy()\n    for y in range(image.shape&#x5B;0]):\n        for x in range(image.shape&#x5B;1]):\n            # cast from uint8 to int so the subtraction cannot wrap around\n            b, g, r = (int(v) for v in output&#x5B;y, x])\n            distance = np.sqrt((r - target_rgb&#x5B;0])**2 +\n                               (g - target_rgb&#x5B;1])**2 +\n                               (b - target_rgb&#x5B;2])**2)\n            if distance &lt; threshold:\n                output&#x5B;y, x] = &#x5B;0, 255, 0]  # highlight matching pixel in green\n    return output\n\n# Detect orange: RGB (255, 165, 0)\nresult = detect_color_by_rgb(image, (255, 165, 0), threshold=50)\n\n<\/pre><\/div>\n\n\n<p>This approach is conceptually clean but computationally slow on large images because it loops over every pixel in Python. Vectorize the operation with NumPy instead:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\ndef detect_color_by_rgb_fast(image, target_rgb, threshold=50):\n    b, g, r = cv2.split(image)\n    distance = np.sqrt((r.astype(float) - target_rgb&#x5B;0])**2 +\n                       (g.astype(float) - target_rgb&#x5B;1])**2 +\n                       (b.astype(float) - target_rgb&#x5B;2])**2)\n    mask = distance &lt; threshold\n    result = image.copy()\n    result&#x5B;mask] = &#x5B;0, 255, 0]\n    return result\n\n<\/pre><\/div>\n\n\n<p>RGB distance works reasonably well in controlled lighting. When the light changes, the same physical color shifts its RGB values enough to fall outside the threshold. 
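<\/p>

<p>The effect is easy to quantify. In this toy calculation the same surface is dimmed to half brightness, and the resulting RGB distance blows well past a typical threshold of 50 even though the color itself has not changed:<\/p>

```python
import numpy as np

bright = np.array([200, 80, 40], dtype=float)  # an orange-ish color in good light
dim = bright * 0.5                             # the same surface under half the light

distance = float(np.sqrt(((bright - dim) ** 2).sum()))
print(round(distance, 1))   # 109.5 -- far outside a threshold of 50
```

<p>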
This is the fundamental limitation of RGB-based detection.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The HSV Color Space and Why It Works Better<\/h2>\n\n\n\n<p>HSV separates the hue (the actual color, independent of brightness) from saturation (how vivid the color is) and value (how bright the pixel is). This separation is exactly what you need for robust color detection because it lets you match on hue alone while ignoring lighting variation.<\/p>\n\n\n\n<p>Think of it this way: a bright red object and a dark red object have very different RGB values but share the same hue. In HSV, they share the same H channel. That is the key property that makes HSV superior for color-based segmentation.<\/p>\n\n\n\n<p>Convert an image to HSV like this:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\nhsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)\n\n<\/pre><\/div>\n\n\n<p>OpenCV stores hue as a value from 0 to 179 (not 0 to 255): hue in degrees is divided by two so that the full 0 to 360 degree color circle fits in an 8-bit channel. Saturation and value still go from 0 to 255. You define a color by specifying a range in HSV:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\nlower_blue = np.array(&#x5B;100, 50, 50])\nupper_blue = np.array(&#x5B;130, 255, 255])\nmask = cv2.inRange(hsv_image, lower_blue, upper_blue)\nresult = cv2.bitwise_and(image, image, mask=mask)\n\n<\/pre><\/div>\n\n\n<p>The <code>cv2.inRange<\/code> call returns a binary mask: 255 where the pixel falls inside the range, 0 everywhere else. Applying that mask with <code>bitwise_and<\/code> shows only the pixels that match your color criteria.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Finding the Right HSV Range for Any Color<\/h2>\n\n\n\n<p>The hardest part of HSV color detection is figuring out the correct range for the color you want. 
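<\/p>

<p>One way to avoid guessing is to seed the range from a known RGB sample. The sketch below uses the standard library&#8217;s <code>colorsys<\/code> and rescales hue to OpenCV&#8217;s 0 to 179 convention; <code>seed_hsv_range<\/code> is a hypothetical helper, and the 10-unit hue margin and the 50 saturation and value floors are arbitrary starting points, not recommended constants:<\/p>

```python
import colorsys

def seed_hsv_range(r, g, b, hue_margin=10):
    """Rough OpenCV-style HSV bounds centered on an RGB sample's hue."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue = int(h * 180) % 180  # colorsys hue is 0-1; OpenCV hue is 0-179
    lower = (max(hue - hue_margin, 0), 50, 50)
    upper = (min(hue + hue_margin, 179), 255, 255)
    return lower, upper

# Pure blue sits at OpenCV hue 120, inside the blue band used earlier
print(seed_hsv_range(0, 0, 255))   # ((110, 50, 50), (130, 255, 255))
```

<p>Treat the result as a first guess rather than final values.<\/p>

<p>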
OpenCV trackbars let you adjust ranges in real time, which is the fastest way to calibrate:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\nimport cv2\nimport numpy as np\n\ndef nothing(x):\n    pass\n\ncap = cv2.VideoCapture(0)\ncv2.namedWindow(&quot;Trackbars&quot;)\ncv2.createTrackbar(&quot;L-H&quot;, &quot;Trackbars&quot;, 0, 179, nothing)\ncv2.createTrackbar(&quot;U-H&quot;, &quot;Trackbars&quot;, 179, 179, nothing)\ncv2.createTrackbar(&quot;L-S&quot;, &quot;Trackbars&quot;, 0, 255, nothing)\ncv2.createTrackbar(&quot;U-S&quot;, &quot;Trackbars&quot;, 255, 255, nothing)\ncv2.createTrackbar(&quot;L-V&quot;, &quot;Trackbars&quot;, 0, 255, nothing)\ncv2.createTrackbar(&quot;U-V&quot;, &quot;Trackbars&quot;, 255, 255, nothing)\n\nwhile True:\n    _, frame = cap.read()\n    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)\n    l_h = cv2.getTrackbarPos(&quot;L-H&quot;, &quot;Trackbars&quot;)\n    u_h = cv2.getTrackbarPos(&quot;U-H&quot;, &quot;Trackbars&quot;)\n    l_s = cv2.getTrackbarPos(&quot;L-S&quot;, &quot;Trackbars&quot;)\n    u_s = cv2.getTrackbarPos(&quot;U-S&quot;, &quot;Trackbars&quot;)\n    l_v = cv2.getTrackbarPos(&quot;L-V&quot;, &quot;Trackbars&quot;)\n    u_v = cv2.getTrackbarPos(&quot;U-V&quot;, &quot;Trackbars&quot;)\n    lower = np.array(&#x5B;l_h, l_s, l_v])\n    upper = np.array(&#x5B;u_h, u_s, u_v])\n    mask = cv2.inRange(hsv, lower, upper)\n    cv2.imshow(&quot;Frame&quot;, frame)\n    cv2.imshow(&quot;Mask&quot;, mask)\n    if cv2.waitKey(1) &amp; 0xFF == ord(&quot;q&quot;):\n        break\ncap.release()\ncv2.destroyAllWindows()\n\n<\/pre><\/div>\n\n\n<p>Move the trackbars until the mask cleanly isolates your target color with no noise. 
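<\/p>

<p>For intuition, the mask the trackbars drive is just a per-channel band test. The NumPy expression below is equivalent to <code>cv2.inRange<\/code>, shown on a tiny synthetic HSV array with made-up pixel values:<\/p>

```python
import numpy as np

# Three made-up HSV pixels: an in-range blue, a washed-out blue, and a red
hsv = np.array([[[110, 200, 200],
                 [110,  20, 200],
                 [  0, 200, 200]]], dtype=np.uint8)

lower = np.array([100, 50, 50])
upper = np.array([130, 255, 255])

# 255 where every channel falls inside its band, 0 everywhere else
mask = np.where(((hsv >= lower) & (hsv <= upper)).all(axis=-1), 255, 0).astype(np.uint8)
print(mask.tolist())   # [[255, 0, 0]] -- only the first pixel matches
```

<p>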
Then read off the final values and hardcode them for your production script.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Mapping Detected Colors to Human-Readable Names<\/h2>\n\n\n\n<p>Reporting detected colors as RGB tuples is functional but not user-friendly. A common approach uses a CSV file that maps RGB values to color names. One widely used dataset contains over 800 color entries with RGB values and names.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\nimport pandas as pd\nimport numpy as np\nimport cv2\n\n# Load the color database\nindex = &#x5B;&quot;color&quot;, &quot;color_name&quot;, &quot;hex&quot;, &quot;R&quot;, &quot;G&quot;, &quot;B&quot;]\ncsv = pd.read_csv(&quot;colors.csv&quot;, names=index, header=None)\n\ndef get_color_name(r, g, b):\n    minimum = 1000\n    color_name = &quot;Unknown&quot;\n    for i in range(len(csv)):\n        d = abs(r - int(csv.loc&#x5B;i, &quot;R&quot;])) + \\\n            abs(g - int(csv.loc&#x5B;i, &quot;G&quot;])) + \\\n            abs(b - int(csv.loc&#x5B;i, &quot;B&quot;]))\n        if d &lt;= minimum:\n            minimum = d\n            color_name = csv.loc&#x5B;i, &quot;color_name&quot;]\n    return color_name\n\n<\/pre><\/div>\n\n\n<p>This uses Manhattan distance rather than Euclidean, which works well in practice and avoids the square root on every comparison.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Building an Interactive Click-to-Detect Tool<\/h2>\n\n\n\n<p>Combine the color name lookup with a mouse callback so clicking anywhere on the image shows you the color name at that pixel:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\ndef click_event(event, x, y, flags, param):\n    if event == cv2.EVENT_LBUTTONDOWN:\n        b, g, r = image&#x5B;y, x]\n        color_name = get_color_name(r, g, b)\n        text = f&quot;{color_name} -&gt; R:{r} G:{g} B:{b}&quot;\n 
       cv2.rectangle(image, (x, y), (x + 200, y - 25), (0, 0, 0), -1)\n        cv2.putText(image, text, (x, y),\n                    cv2.FONT_HERSHEY_SIMPLEX, 0.7,\n                    (255, 255, 255), 2)\n        cv2.imshow(&quot;Image&quot;, image)\n\nimage = cv2.imread(&quot;colorful_scene.jpg&quot;)\ncv2.imshow(&quot;Image&quot;, image)\ncv2.setMouseCallback(&quot;Image&quot;, click_event)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\n<\/pre><\/div>\n\n\n<p>Click anywhere on the image and a label appears showing the color name and its RGB values at that exact pixel coordinate.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Detecting Colors in a Webcam Live Feed<\/h2>\n\n\n\n<p>The same HSV range method scales to live video. Grab frames from the webcam, apply the mask, and display both the raw feed and the color mask side by side:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n\ncap = cv2.VideoCapture(0)\n\n# Tuned HSV range for detecting a green object\nlower_green = np.array(&#x5B;35, 50, 50])\nupper_green = np.array(&#x5B;85, 255, 255])\n\nwhile True:\n    _, frame = cap.read()\n    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)\n    mask = cv2.inRange(hsv, lower_green, upper_green)\n    result = cv2.bitwise_and(frame, frame, mask=mask)\n    cv2.imshow(&quot;Original&quot;, frame)\n    cv2.imshow(&quot;Green Detection&quot;, result)\n    if cv2.waitKey(1) &amp; 0xFF == ord(&quot;q&quot;):\n        break\n\ncap.release()\ncv2.destroyAllWindows()\n\n<\/pre><\/div>\n\n\n<p>If your green detection mask is too noisy, tighten the saturation and value bounds. If it misses valid pixels, relax the bounds. 
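<\/p>

<p>Once the mask cleanly isolates an object, its position is just the average coordinate of the masked pixels, which is what makes simple color-based tracking possible. Here is a NumPy sketch of that centroid step on a toy mask, the same idea as the <code>cv2.moments<\/code> centroid on a binary mask:<\/p>

```python
import numpy as np

# Toy 5x5 mask with a small "object" in the lower-right corner
mask = np.zeros((5, 5), dtype=np.uint8)
mask[3:5, 3:5] = 255

ys, xs = np.nonzero(mask)                        # row and column indices of masked pixels
centroid = (float(xs.mean()), float(ys.mean()))  # (x, y) center of the object
print(centroid)   # (3.5, 3.5)
```

<p>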
The trackbar script above is the best way to find the right values for any color under your specific lighting.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Common Pitfalls and How to Avoid Them<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Lighting variation breaks RGB detection<\/strong>: Always prefer HSV in real-world conditions. Use RGB only under stable, controlled lighting.<\/li>\n\n<li><strong>HSV hue wraps around at red<\/strong>: Red appears at both hue near 0 and near 179 in OpenCV. If you are detecting red objects, you need two HSV ranges and must combine them with a logical OR.<\/li>\n\n<li><strong>cv2.imread returns None if the path is wrong<\/strong>: Always check if the image loaded successfully before processing. A missing file produces a silent None that crashes on the next line.<\/li>\n\n<li><strong>Unsigned wraparound in distance calculations<\/strong>: Pixel values are stored as uint8, so subtracting a larger value wraps around to a large positive number instead of going negative. Cast pixel values to int or float before the arithmetic, as the vectorized example does with <code>astype(float)<\/code>.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Frequently Asked Questions<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Why does HSV detection work better than RGB in different lighting conditions?<\/h3>\n\n\n\n<p>Because the hue channel encodes the color&#8217;s position on the color wheel independently of intensity. A red object under dim light and the same red object under bright sunlight have nearly identical hue values but very different RGB values. By thresholding on hue and saturation, you ignore brightness entirely and get consistent detection regardless of lighting.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">How do I detect multiple colors at once?<\/h3>\n\n\n\n<p>Call <code>cv2.inRange<\/code> separately for each color range, then combine the masks with <code>cv2.bitwise_or<\/code>. 
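<\/p>

<p>On binary masks, <code>cv2.bitwise_or<\/code> is an element-wise OR, so the combination step can be sketched in plain NumPy. The example below merges two red hue ranges across the wraparound, using made-up hue values:<\/p>

```python
import numpy as np

# Hues of four hypothetical pixels: low red, green, high red, blue
hues = np.array([5, 60, 175, 110], dtype=np.uint8)

mask_low  = np.where(hues <= 10, 255, 0).astype(np.uint8)    # red near hue 0
mask_high = np.where(hues >= 170, 255, 0).astype(np.uint8)   # red near hue 179

# Element-wise OR marks a pixel if either range matched
red_mask = mask_low | mask_high
print(red_mask.tolist())   # [255, 0, 255, 0]
```

<p>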
If you have n colors, you have n pairs of HSV lower and upper bounds, n masks, and a combined mask that marks any pixel matching any of your target colors.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What is the best way to tune HSV bounds for a specific color?<\/h3>\n\n\n\n<p>Use the real-time trackbar script shown earlier. It is faster than any other method because you see the mask update as you move each trackbar. Once the mask isolates your target cleanly, read off the six trackbar values and plug them into your production script.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">How do I handle detecting the color red, which spans the hue range?<\/h3>\n\n\n\n<p>Define two separate HSV ranges: one for the low end of red (hue 0 to around 10) and one for the high end (hue around 170 to 179). Generate two masks, then combine them with <code>cv2.bitwise_or<\/code>. The combined mask correctly identifies red pixels across the wraparound boundary.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Can I use color detection for object tracking?<\/h3>\n\n\n\n<p>Yes. After you isolate your target color with a mask, compute the centroid of all masked pixels using <code>cv2.moments<\/code>. The centroid gives you a coordinate you can track frame to frame, which is the foundation for simple color-based object trackers.<\/p>\n\n","protected":false},"excerpt":{"rendered":"<p>Color detection is one of the most approachable problems in computer vision. The idea is simple: look at any pixel in an image and figure out what color it represents. Yet doing this reliably across different lighting conditions, camera types, and backgrounds requires more than just comparing RGB numbers. 
This guide walks through building a [&hellip;]<\/p>\n","protected":false},"author":28,"featured_media":66384,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[],"class_list":["post-19099","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-examples"],"blocksy_meta":[],"_links":{"self":[{"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/posts\/19099","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/users\/28"}],"replies":[{"embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/comments?post=19099"}],"version-history":[{"count":8,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/posts\/19099\/revisions"}],"predecessor-version":[{"id":66350,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/posts\/19099\/revisions\/66350"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/media\/66384"}],"wp:attachment":[{"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/media?parent=19099"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/categories?post=19099"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.askpython.com\/wp-json\/wp\/v2\/tags?post=19099"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}