r/websecurity


Should I learn PHP and JS before diving into web security?

I'm sorry, I don't know if this is the right subreddit to ask (⁠;⁠;⁠;⁠・⁠_⁠・⁠). Let me briefly introduce myself, then I'll get to the main point.

I'm originally from a CS background, although my programming skills were never good. But I found my interest in cybersecurity, so a few months ago I started learning the basics: networking from Jeremy's IT Lab, Linux basics from pwn.college, the first 25 rooms on TryHackMe, and a few retired machines on HTB [with walkthroughs (⁠〒⁠﹏⁠〒⁠)]. I've completed only two learning paths from the PortSwigger Web Security Academy, but the recent labs require me to write PHP payloads (and JS ones too). I only know JS syntax and have never actually used it to build anything, so that counts as zero knowledge, right?

So my question is: is it foolish that I've been doing labs without knowing JS and PHP? Should I pause the learning path to learn PHP and JS first?




TL;DR – Independent Research on Advanced Parsing Discrepancies in Modern WAFs (JSON, XML, Multipart). Seeking Technical Peer Review

Hi all,

I’m currently doing independent research in the area of WAF parsing discrepancies, specifically targeting modern cloud WAFs and how they process structured content types like JSON, XML, and multipart/form-data.

This is not about classic payload obfuscation like encoding SQLi or XSS. Instead, I’m exploring something more structural.

The main idea I’m investigating is this:

If a request is technically valid according to the specification, but structured in an unusual way, could a WAF interpret it differently than the backend framework?

In simple terms:

WAF sees Version A

Backend sees Version B

If those two interpretations are not the same, that gap may create a security weakness.

Here’s what I’m exploring in detail:

First – JSON edge cases.

I’m looking at things like duplicate keys in JSON objects, alternate Unicode representations, unusual but valid number formats, nested JSON inside strings, and small structural variations that are still valid but uncommon.

For example, if the same key appears twice, some parsers take the first value, some take the last. If a WAF and backend disagree on that behavior, that’s a potential parsing gap.
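To make the duplicate-key case concrete, here's a minimal Python sketch. Python's json module keeps the last value for a duplicate key (RFC 8259 leaves the behavior undefined), and the first_wins hook below simulates a hypothetical inspector that keeps the first; the payload string is just an illustration:

```python
import json

raw = '{"cmd": "safe", "cmd": "rm -rf /"}'

# Python's json module keeps the LAST duplicate key.
backend_view = json.loads(raw)

# Simulate a parser that keeps the FIRST duplicate key instead,
# standing in for a hypothetical WAF inspection engine.
def first_wins(pairs):
    result = {}
    for key, value in pairs:
        result.setdefault(key, value)
    return result

waf_view = json.loads(raw, object_pairs_hook=first_wins)

print(waf_view["cmd"])      # -> "safe"       (what the inspector evaluates)
print(backend_view["cmd"])  # -> "rm -rf /"   (what the application acts on)
```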

Second – XML structure variations.

I’m exploring namespace variations, character references, CDATA wrapping, layered encoding inside XML elements, and how different media-type labels affect parsing behavior.

The question is whether a WAF fully processes these structures the same way a backend XML parser does, or whether it simplifies inspection.
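Here's a small sketch of the character-reference and CDATA angle, using Python's stdlib xml.etree.ElementTree as a stand-in for the backend parser, with a naive substring check playing the role of a hypothetical byte-level inspector:

```python
import xml.etree.ElementTree as ET

# Three raw forms that a byte-level substring filter judges differently,
# but that an XML parser decodes to the exact same text content.
variants = [
    "<v>select * from users</v>",                # plain
    "<v>sele&#x63;t * from users</v>",           # character reference for 'c'
    "<v>sel<![CDATA[ect * from users]]></v>",    # keyword split across a CDATA boundary
]

for raw in variants:
    decoded = ET.fromstring(raw).text
    naive_match = "select" in raw   # what the byte-level inspector sees
    print(f"inspector_matches={naive_match}  backend_sees={decoded!r}")
```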

Third – multipart complexity.

Multipart parsing is much more complex than many people realize. I’m looking at nested parts, duplicate field names, unusual but valid header formatting inside parts, and layered encodings within multipart sections.

Since multipart has multiple parsing layers, it seems like a good candidate for structural discrepancies.
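As a concrete illustration of the duplicate-field-name layer, this sketch builds a tiny multipart/form-data body by hand and parses it with Python's stdlib MIME machinery (form-data is MIME-shaped, so the email parser works as a stand-in); whether "first wins" or "last wins" is purely consumer policy:

```python
from email.parser import BytesParser
from email.policy import default

boundary = "XBOUNDARY"

# A multipart/form-data body with a duplicate field name "mode".
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="mode"\r\n'
    "\r\n"
    "read-only\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="mode"\r\n'
    "\r\n"
    "admin\r\n"
    f"--{boundary}--\r\n"
)

raw = f"Content-Type: multipart/form-data; boundary={boundary}\r\n\r\n{body}"
msg = BytesParser(policy=default).parsebytes(raw.encode())

values = [part.get_content().strip() for part in msg.iter_parts()]
print(values)       # ['read-only', 'admin']
print(values[0])    # a first-wins consumer acts on 'read-only'
print(values[-1])   # a last-wins consumer acts on 'admin'
```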

Fourth – layered encapsulation.

This is where it gets interesting.

What happens if JSON is embedded inside XML?

Or XML inside JSON?

Or structured data inside base64 within multipart?

Each layer may be parsed differently by different components in the request chain.

If the WAF inspects only the outer layer, but the backend processes inner layers, that might create inspection gaps.
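For the layered case, a two-layer toy version is enough to show the inspection gap; the field names here are made up for illustration:

```python
import base64
import json

# Inner payload hidden one layer down: structured JSON, base64-encoded,
# carried as an opaque string field of the outer JSON document.
inner = json.dumps({"action": "delete_all"})
outer = json.dumps({"blob": base64.b64encode(inner.encode()).decode()})

# Outer-layer inspection: the suspicious token is invisible.
print("delete_all" in outer)   # False

# A backend that decodes the inner layer sees it plainly.
decoded = json.loads(base64.b64decode(json.loads(outer)["blob"]))
print(decoded["action"])       # delete_all
```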

Fifth – canonicalization differences.

I’m also exploring how normalization happens.

Do WAFs decode before inspection?

Do they normalize whitespace differently?

How do they handle duplicate headers or duplicate parameters?

If normalization order differs between systems, that’s another possible discrepancy surface.
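On the duplicate-parameter and decode-order questions specifically, here's a quick sketch of how differently stacks can canonicalize the same query string (the first-wins/last-wins attributions are general behaviors, worth verifying per stack):

```python
from urllib.parse import parse_qs, parse_qsl, unquote

query = "role=user&role=admin"

pairs = parse_qs(query)                  # {'role': ['user', 'admin']}: both survive
print(pairs["role"][0])                  # first-wins consumers (e.g. Flask's request.args.get)
print(dict(parse_qsl(query))["role"])    # last-wins consumers (e.g. PHP's $_GET)

# Decode order matters too: inspect-then-decode vs decode-then-inspect
# disagree on percent-encoded input.
encoded = "q=%73elect"
print("select" in encoded)            # False: raw-bytes inspection
print("select" in unquote(encoded))   # True: post-decoding view
```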

Important:

I’m not claiming I’ve found bypasses. This is structural research at this stage. I’m trying to identify unexplored mutation surfaces that may not have been deeply analyzed in public research yet.

I would really appreciate honest technical feedback:

Am I overestimating modern WAF parsing weaknesses?

Are these areas already heavily hardened internally?

Is there a stronger angle I should focus on?

Am I missing a key defensive assumption?

This is my research direction right now. Please correct me if I’m wrong anywhere.

Looking for serious discussion from experienced hunters and researchers.


[Tool] Rapid Web Recon: Automated Nuclei Scanning with Client-Ready PDF Reporting

Hi everyone,

I wanted to share a project I’ve been working on called Rapid Web Recon. My goal was to create a fast, streamlined way to get a security "snapshot" of a website—covering vulnerabilities and misconfigurations—without spending hours parsing raw data.

The Logic: I built this as a wrapper around the excellent Nuclei engine from ProjectDiscovery. I chose Nuclei specifically because of the community-driven templates that are constantly updated, which removes the need to maintain static logic myself.
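For anyone curious about the wrapper shape without reading the repo, here's a stripped-down sketch of the Nuclei invocation side in Python. The flag names follow recent Nuclei v3 releases and may differ on your installed version, and the function name is mine, not necessarily what the repo uses:

```python
import json
import subprocess

def run_nuclei(target: str, rate_limit: int = 10) -> list[dict]:
    """Run Nuclei against a single target and return findings as dicts.

    Flag names follow recent Nuclei v3 releases; check `nuclei -h`
    for your installed version.
    """
    cmd = [
        "nuclei",
        "-u", target,                       # single target URL
        "-severity", "low,medium,high,critical",
        "-rate-limit", str(rate_limit),     # requests per second (the "stealth" knob)
        "-jsonl",                           # one JSON finding per line
        "-silent",                          # findings only, no banner
    ]
    proc = subprocess.run(cmd, capture_output=True, text=True, check=False)
    return [json.loads(line) for line in proc.stdout.splitlines() if line.strip()]

for finding in run_nuclei("https://example.com"):
    print(finding.get("info", {}).get("severity"), finding.get("template-id"))
```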

Key Features:

  • Automated Workflow: One command triggers the scan and handles the data sanitization.

  • Professional Reporting: It generates a formatted PDF report out of the box.

  • Executive & Technical Depth: The report includes a high-level risk summary, severity counts, and detailed findings with remediation advice for the client.

  • Mode Selection: Includes a default "Stealth" mode for WAF-protected sites (like Cloudflare) and an "Aggressive" mode for internal network testing.

Performance: A full scan (WordPress, SSL, CVEs, etc.) for a standard site typically takes about 10 minutes. If the target is behind a heavy WAF, the rate-limiting logic keeps the scan from getting the source IP blocked, though the scan may take longer.

GitHub Link: https://github.com/AdiMahluf/RapidWebRecon

I’m really looking for feedback from the community on the reporting structure or any features you'd like to see added. Hope this helps some of you save time on your audits!