How to Build AI Hackbots
Introduction
Hackbots are not a future thing. They are a now thing. There are a handful of ways to build one, and most of them work to varying degrees. The approach I’m going to walk through here is simple and leans heavily on a single idea: train your hackbot the same way you’d train a junior pentester, by feeding it the books and blogs it should already have read.
This post is centered around application security and is optimized for MCP integration with web proxy tools. Traditionally I have been a Burp Suite advocate, but at the time of writing I am experimenting with Caido. With Caido I generate a Personal Access Token (PAT) that enables Claude Code to authenticate to my Caido instance, and then I can use Caido skills to give my hackbot the ability to analyze HTTP history, tamper with requests, and push findings back into the Caido UI. Burp Suite has its own MCP options, and most of what I describe will work the same way against Burp. If you are not using Claude, many of these concepts will still apply to the tooling of your choice. If you don't have an MCP-enabled proxy, the hackbot still works through curl requests, but you lose traffic analysis, which is a vital component.
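For context, Claude Code can load project-scoped MCP servers from a .mcp.json file at the project root. A minimal sketch of what wiring a proxy MCP server might look like (the caido-mcp command and CAIDO_PAT variable name are placeholders, not the real integration; consult your proxy's MCP documentation for the actual command and auth mechanism):

```json
{
  "mcpServers": {
    "caido": {
      "command": "caido-mcp",
      "args": ["--instance", "http://127.0.0.1:8080"],
      "env": { "CAIDO_PAT": "<your-personal-access-token>" }
    }
  }
}
```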
Disclaimer
Before getting into structure and code, understand that you can write rules that say "no destructive actions" and still have accounts locked out, systems taken offline, and databases wiped. Guardrails should be implemented, but unfortunately they can't be fully trusted. This is especially true during long sessions where context gets compacted and rules are forgotten. A human in the loop and permission prompts before each action help, but neither fully removes the risk of destructive behavior. If you use hackbots, understand the risk and mitigate it as much as possible.
Workspace Setup
Claude Code ships with a global config directory; on my machine that's ~/.claude/. You can dump skills, agents, and rules there and they apply to every session you ever start, anywhere on disk. For hackbot development, you do not want to use this global workspace. Instead, create a project root to establish boundaries: it keeps things organized, prevents wasting tokens during normal usage, and keeps your hackbot from leaking into the rest of your life. I created ~/WAPT/, short for Web Application Penetration Test.
Inside your project root, create a .claude/ folder. Claude Code (and tools like Cursor, Cline, and Continue) walk up from your current working directory looking for these config folders, and they merge what they find.
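A minimal sketch of that scaffold, matching the layout shown in the next section:

```bash
# Project root with project-scoped Claude Code config
mkdir -p ~/WAPT/.claude/{skills,agents,templates}
mkdir -p ~/WAPT/primitives/by-class ~/WAPT/Projects
touch ~/WAPT/.claude/CLAUDE.md ~/WAPT/.claude/settings.json ~/WAPT/README.md
```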
The Structure
Here’s what mine looks like. Yours may look different, but this is a good starting point. If you’re not using Claude, adapt accordingly.
```
WAPT/
├── README.md                     # workspace map for future-me
├── .claude/
│   ├── CLAUDE.md                 # rules of engagement (the briefing)
│   ├── settings.json             # permission allowlist + read denylist
│   ├── skills/                   # 60+ vuln-class + workflow skills
│   │   ├── http-triage/SKILL.md
│   │   ├── jwt-analysis/SKILL.md
│   │   ├── xss/SKILL.md
│   │   ├── sqli/SKILL.md
│   │   ├── ssrf/SKILL.md
│   │   ├── idor/SKILL.md
│   │   ├── oauth/SKILL.md
│   │   ├── start-target/SKILL.md
│   │   ├── scope-checker/SKILL.md
│   │   ├── primitives-store/SKILL.md
│   │   ├── report-writer/SKILL.md
│   │   └── ...
│   ├── agents/
│   │   ├── validator.md          # skeptical finding triager
│   │   └── exploit-builder.md    # turns rough PoC into clean PoC
│   └── templates/
│       ├── note.md
│       ├── lead.md
│       ├── primitive.md
│       ├── finding.md
│       ├── report.md
│       └── target-claude.md
├── primitives/                   # cross-target reusable gadgets/oracles
│   ├── README.md
│   ├── index.json
│   ├── search.ts
│   └── by-class/
├── Projects/
│   └── <target>/                 # one folder per target
│       ├── .claude/CLAUDE.md     # target-specific scope/creds reference
│       ├── scope.md
│       ├── creds.md
│       ├── notes/
│       ├── leads/
│       ├── primitives/
│       ├── findings/
│       ├── reports/
│       └── recon/
└── Persona_<NAME>.txt            # test identity for registrations
```
A few things to call out in that tree before we go through them:
- The .claude/ folder is the brain. Everything inside it (CLAUDE.md, skills, agents, templates) gets loaded automatically when you run claude from anywhere at or under this directory.
- The Projects/ folder is where actual hunting happens. Each target gets its own subfolder. Notes, scope, creds, and findings stay isolated per target.
- The primitives/ folder at the root is shared across all targets. It's where reusable building blocks (oracles, gadgets, recurring tricks) get filed so they're searchable next time, on the next target.
- The persona file is for test accounts that are frequently used in self-registrations.
Skills
If you take only one thing away from this post, take this: the value of a hackbot is not in the model, it's in the skills. A skill is a bundle of instructions, examples, and reference material. Skills are what turn a general-purpose model into something that knows your craft.
Skills Setup Example
My current set is around 60 skills. They split roughly into three groups.
Workflow skills
- start-target: bootstraps a new target folder
- scope-checker: validates a host against the target's scope file
- session-search: ripgrep across past Claude sessions
- js-analysis: pulls JS bundles, recovers source maps, hunts secrets
- source-review: code-review pass with semgrep
- report-writer: turns a finding into a submission-ready report
- primitives-store: saves and searches reusable gadgets across targets
- http-triage: single-request decision tree (which attack class fits this request)
- vuln-index: symptom-to-skill lookup (when you don't know which class applies)
Vulnerability skills
- Injection: sqli, nosql-injection, ldap-injection, xpath-injection, command-injection, xxe, ssti, ssi-esi-injection, crlf-injection, xslt-injection, rsql-injection, orm-injection, parameter-pollution
- Server-side: ssrf, lfi-rfi, path-traversal, secondary-context-path-traversal, file-upload, request-smuggling, web-cache-poisoning, host-header-injection, subdomain-takeover
- Auth and identity: idor, mass-assignment, jwt-analysis, saml, oauth, csrf, cors, session, cookie-security, account-takeover
- Client-side: xss, xssi, dom-clobbering, prototype-pollution, css-injection, clickjacking, reverse-tab-nabbing, postmessage, open-redirect, client-side-template-injection, client-side-path-traversal
- Logic and protocol: business-logic, race-condition, type-juggling, graphql, protobuf-analysis, insecure-deserialization, input-fuzzing, crypto-hygiene, security-headers, dependency-confusion
Helpers
- caido-mode (sits at the global level so it works everywhere): wires up the Caido MCP integration
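To make the format concrete, here is a trimmed sketch of what a vuln-class SKILL.md can look like. The name and description frontmatter fields follow Claude Code's skill format; the body headings are my own convention, so adapt them freely:

```markdown
---
name: jwt-analysis
description: Methodology for finding and exploiting JWT vulnerabilities. Use when the target issues or accepts JSON Web Tokens.
---

# JWT Analysis

## When to invoke
- Bearer tokens or cookies with three base64url segments separated by dots

## Methodology
1. Decode the header and payload; note alg, kid, jku, and custom claims.
2. Test alg:none, algorithm confusion (RS256 to HS256), and weak HMAC secrets.
3. Check whether kid or jku values are attacker-influenced.

## High-signal indicators
- kid values that look like file paths or SQL fragments
- Tokens that remain valid after logout or privilege changes
```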
Skills Data Sources
You can write your own skills from scratch, but this is incredibly time-consuming. Instead, focus on the key sources you want your skills built from, then give Claude (or another platform) instructions for creating a methodology-first writeup that you (the human) review and revise. The more detail and guidance you provide in your prompt, the better your skill will be. A very basic example of a prompt flow is provided below.
Prompt 1: Create a skill for EXAMPLE using https://example.com. Ensure to create a methodology-first approach so that vulnerabilities can be systematically discovered. Take your time and be very thorough. You are a senior penetration tester and expert bug bounty hunter that is looking for high signal vulnerabilities.
Prompt 2: Contribute to the EXAMPLE skill using https://example2.com. Ensure to once again prioritize methodology and context for various scenarios. IMPORTANT: Do not add any duplicate or redundant information to the EXAMPLE skill after analyzing the new data source. Keep development focused on high signal vulnerabilities and prioritize techniques that lead to high/critical severity findings. There is no rush. Take your time to ensure that this is done correctly on the first try.
Prompt 3: Review the data sources I provided for my new EXAMPLE skill. Do a final comprehensive review to see if you can make any improvements or fix issues. Be very picky and maintain high standards. Ensure that the application and vulnerability context is properly mapped for a strong methodology. Feel free to propose questions for me if you have any questions about this task.
Here are some of my favorite data sources for web hacking skill development. Consider this a starting point and not a conclusive data set for your hackbot.
- PortSwigger web security academy
- PortSwigger research
- PortSwigger XSS cheat sheet
- PortSwigger URL validation bypass cheat sheet
- OWASP WSTG
- OWASP vulnerabilities
- CTBB hacker notes
- CTBB research lab
- CTBB writeups
- Sam Curry’s blog
- Beyond XSS article
- client-side-bugs-resources
- portswigger-websecurity-academy (lab solutions and scripts)
- HackerOne Hacktivity
- bugbountyhunting.com
- Awesome-Bugbounty-Writeups
- PayloadsAllTheThings
- HackTricks
- SecLists
- OneListForAll
- PayloadBox
- Bug-Bounty-Wordlists
You can also transcribe video content, so another technique I used was selecting technical videos from sources such as https://www.youtube.com/@criticalthinkingpodcast or DEF CON talks, and then using the transcribed data for skill development.
The pattern when building a skill: pick the class, pull together the relevant sources, analyze and build from the sources with structured prompts, then review heavily. Running multiple rounds of revisions is strongly advised.
Tip: Do not use multiple data sources or try to create multiple skills in a single prompt. Even with the latest models at maximum context window and effort, content will get skipped. For example, if you ask to have skills created from PayloadsAllTheThings in one pass, entire vulnerability classes will get skipped. Instead, develop one skill at a time with one source at a time. Additionally, many of your sources will contain identical information, so on every revision run, make sure to eliminate duplicate data. Prioritize additional revisions for large data sources.
Agents
Skills run in your main conversation context. They are part of the same model that’s reading your code, browsing your Caido history, and writing your notes. Agents are different. An agent runs in a separate context window with its own scoped tool access. The main session dispatches it, the agent runs, the agent returns a single message back to the main session. That isolation is the whole point. There are several different use cases for agents, but I will showcase two.
Validator Agent
Automation is great, but it is notorious for false positives. You will want to minimize false positives as much as possible during testing. This validator agent reads a findings/<slug>.md file cold, with no context from the hunting session, and returns a verdict:
```
VERDICT: <VALID | LIKELY_FP | INSUFFICIENT_POC | OUT_OF_SCOPE>
SEVERITY: <Critical | High | Medium | Low | Info>
CONFIDENCE: <high | medium | low>
REASONING: ...
ISSUES: ...
REQUIRED EVIDENCE TO SUBMIT: ...
```
The agent's system prompt tells it to default to skepticism and includes details of common false positives. Some examples: CORS with Allow-Origin: * and no credentials is Info, not High; self-XSS with no delivery vector is Info; an open redirect alone is Low. Bug-blending (multiple bugs in one report) gets flagged. I run it before every report, and it catches inflated severity reliably. When you encounter issues, update the markdown file: your hackbot will require continuous development and optimization to stay sharp, and the same goes for every other component described in this post.
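For shape, here is a trimmed sketch of what an agent definition like validator.md can look like. The name/description/tools frontmatter follows Claude Code's subagent format; the body text is my own convention:

```markdown
---
name: validator
description: Skeptical finding triager. Reads a finding file cold and returns a verdict. Use before writing any report.
tools: Read, Grep, Bash
---

You are a skeptical security triager. You receive a path to a findings/<slug>.md
file. Read it with no assumptions about the hunting session that produced it.

Default to skepticism. Common false positives to downgrade:
- CORS with Allow-Origin: * and no credentials -> Info, not High
- Self-XSS with no delivery vector -> Info
- Open redirect alone -> Low
- Multiple bugs blended into one report -> flag it

Return exactly: VERDICT, SEVERITY, CONFIDENCE, REASONING, ISSUES,
REQUIRED EVIDENCE TO SUBMIT.
```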
Exploit Builder Agent
Once a finding is validated, it is great to have a basic working PoC. The exploit-builder agent does that. It picks the artifact based on the bug class. A web vuln gets a single curl or a self-contained index.html. An auth bug gets a 2-3 step curl sequence with each step labeled. RCE or SQLi gets a minimal Python script (stdlib + requests, no dependencies). It strips unused headers, parameterizes credentials at the top, and adds expected output as comments. Output ends up under findings/<slug>-poc.<ext>, referenced from the finding file, ready for the report.
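As an illustration of the artifact style (hypothetical endpoints and credentials, not a real finding), a labeled curl sequence for an auth bug might look like:

```bash
# Step 1: authenticate as a low-privilege user (expected: 200, returns a token)
TOKEN=$(curl -s -X POST https://target.example/api/login \
  -H 'Content-Type: application/json' \
  -d '{"email":"user@example.com","password":"CHANGEME"}' | jq -r .token)

# Step 2: read another user's record via IDOR (expected: 200 with victim data)
curl -s https://target.example/api/users/1337/profile \
  -H "Authorization: Bearer $TOKEN"
```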
Skills That Could Have Been Agents
A few of my skills are close to the line. report-writer could be an agent. js-analysis could be an agent. start-target could be an agent. They’re skills because they live in the main conversation, where the user can see what they’re doing in real time and intervene. Agents are for tasks where the independence is the point. The validator’s value comes from not seeing the hunting context. If it ran in the main session, it would inherit my biases, and the entire purpose of running it would dissolve.
Templates
Every note, lead, finding, report, and primitive in this workspace is generated from a template. The templates live under .claude/templates/ and they’re short.
```
note.md          → an observation, one screen
lead.md          → a hypothesis with a test plan
primitive.md     → a reusable gadget for the primitives store
finding.md       → a confirmed bug with PoC, used until promoted to a report
report.md        → submission-ready
target-claude.md → the per-target CLAUDE.md template, used by start-target
```
Every observation starts as a note. Some notes graduate to leads. Some leads graduate to findings. Some findings graduate to reports. Templates make graduation cheap, because the structure is already familiar from the previous tier. There’s no “now I need to reformat into report shape” friction at the end. When I say “save this as a finding” the model reads templates/finding.md, fills in the fields, and writes the file. That only works because the templates are stable and short. If they were 200 lines of optional fields, the model would skip half of them and produce garbage.
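For reference, a finding.md in this spirit stays under a screen. A sketch (the field names are my own convention; yours may differ):

```markdown
# <Title: vuln class + endpoint>

- Class:
- Severity (proposed):
- Endpoint:
- Status: draft | validated | reported

## Summary
One paragraph: what, where, impact.

## Reproduction
Numbered steps or a single request/response pair.

## PoC
Link to findings/<slug>-poc.<ext>.

## Evidence
Response snippets, screenshots.
```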
CLAUDE.md
The single most important file in the workspace is .claude/CLAUDE.md. It’s the first thing Claude reads. It contains the rules of engagement. Here’s an example of how mine is structured.
- Hard rules: stay in scope, no destructive actions, “PoC or GTFO,” always take notes.
- Note hierarchy: the funnel I described above (note → lead → finding → report).
- OOB testing placeholder: during skill development, I provided a placeholder value for OOB testing. This placeholder value is noted in my CLAUDE.md file, and I can tell it to use a different active webhook instead.
- Caido: always proxy through Caido and use the caido-mode skill. Persistent history, visible Replay tabs, findings pushed to the Findings tab in Caido.
- How to work: invoke skills when relevant, but don't be limited by them. Cap subagents at 2-3. Keep notes/_session.md updated for long autonomous runs.
- Severity calibration: defaults to skepticism, with specific examples (CORS, self-XSS, WAF bypass, paywall, open redirect).
- Output preferences: third person perspective, technical, no flavor text.
- Risk posture: Specify how you will be using permissions.
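An abbreviated excerpt, to show the register (illustrative, not my full file):

```markdown
# Rules of Engagement

- Stay in scope. Check scope.md before touching any host; use scope-checker when unsure.
- No destructive actions: no deletes, no account lockouts, no state-changing admin calls.
- PoC or GTFO: no finding is real until it reproduces from a clean session.
- Always take notes: every observation becomes a note before anything else.
- All HTTP traffic goes through Caido (caido-mode skill). Push findings to the Findings tab.
- OAST placeholder: replace OOB_WEBHOOK_PLACEHOLDER with the active webhook I give you.
```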
settings.json
.claude/settings.json is where you tell Claude Code which commands it can run without asking for permission. It’s a simple allow/deny list.
```json
{
  "permissions": {
    "allow": [
      "Bash(curl:*)",
      "Bash(wget:*)",
      "Bash(jq:*)",
      "Bash(rg:*)",
      "Bash(node:*)",
      "Bash(bun:*)",
      "Bash(python:*)",
      "Bash(nmap:*)",
      "Bash(dig:*)",
      "Bash(openssl:*)"
    ],
    "deny": [
      "Bash(rm -rf /:*)",
      "Read(<home>/.password-store/**)",
      "Read(**/.env)",
      "Read(**/.env.*)"
    ]
  }
}
```
Tighten it according to your appetite for risk.
primitives
Every hunter I know has a mental list of "things that have worked before." ?debug=true reveals a stack trace on Spring apps. _method=DELETE flips POST to DELETE on Rails. eyJhbGciOiJub25lIn0.eyJzdWIiOiJhZG1pbiJ9. is the alg:none forge. Someone tweets a new JWT confusion bug and you mentally file it under "try on JWT targets."
The problem is that the mental list lives in your head. It does not always transfer.
The primitives/ folder fixes that. It’s a flat list of reusable gadgets, indexed by class, technology, and vendor, with each entry stored as a small markdown file plus a row in index.json. The primitives-store skill knows how to add and search. When the bot starts on a new target, it greps the primitives store for matches against the detected stack and proposes things to try.
```
~/WAPT/primitives/
├── index.json    # search catalog
├── search.ts     # bun script for queries
└── by-class/
    ├── ssrf-target/
    ├── jwt-attack/
    ├── oauth-provider-quirk/
    ├── subdomain-takeover-fingerprint/
    ├── deserialization-gadget/
    └── ...
```
A primitive must apply to more than one target to earn a slot. Otherwise it’s a finding, not a primitive. That rule keeps the store from drifting into “every cool bug I’ve ever found” and turning into an unsearchable mess. Search before adding. Update existing entries when you find another instance.
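Here is a sketch of what an entry can look like, using the Spring debug trick from above (the field names are my own convention):

```markdown
# Spring debug parameter stack trace

- Class: info-disclosure
- Tech: Java, Spring Boot
- Seen on: 2 targets

## Trigger
Append ?debug=true to an error-producing endpoint on Spring apps.

## Signal
Full stack trace in the response body, often with internal class names and paths.

## Notes
Feed recovered internal paths into the path-traversal and ssrf skills.
```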
README.md
The README at the workspace root (~/WAPT/README.md) is simply documentation. It covers the layout, workflows, and all the basics. If I haven’t used this hackbot in a while or want to share it with others, this provides clear instruction. It also gets read by the bot the first time I cd into the workspace, which is useful in its own right.
Demonstration: OWASP Juice Shop
The OWASP Juice Shop web application was used for demonstration purposes. It’s intentionally vulnerable, ships as a Docker container, has tons of challenges, and is widely used as a teaching tool. It is also very diverse in its bug surface.
Getting Started
Prompt: Let’s start a new target. The target is OWASP Juice Shop, running locally at http://localhost:3000. It’s an intentionally vulnerable training app, so scope is “everything.” Use start-target to get rolling. My webhook for OAST testing will be [REDACTED].
This triggers the start-target skill, which scaffolds Projects/juice-shop/ with the standard layout: scope.md, creds.md, notes/, leads/, findings/, reports/, recon/, plus a target-specific .claude/CLAUDE.md. With the project kicked off, I set Claude to auto mode and let it do its thing. After an hour, 81% of the hacking challenges were solved, and most of the unsolved ones were blocked by Docker security restrictions. These challenges have disabledEnv: 'Docker' in their challenge metadata, and the verifier middleware (utils.isChallengeEnabled) returns false whenever the runtime is a Docker container, so the solve check never runs even when the underlying bug is exploited successfully. A few other challenges related to Web3 and NFTs could not be completed because they require a real in-browser Ethereum wallet (MetaMask) injecting window.ethereum, plus on-chain interaction with Juice Shop's deployed test contracts.
Breaking Rules
To summarize: the hackbot solved everything within its capabilities, and it did so quickly. But, similar to a recent study I did on AI-powered browser-extension hacking against PortSwigger labs, it cheated. The AI's goal was to solve the challenges as fast as possible, and that is exactly what it did.
Here are some key takeaways from this experiment.
- This testing was done with zero interaction from me after kickoff. All challenges were solved with curl requests. No web proxy MCP was used for traffic analysis.
- The /api/challenges endpoint was found, and it returned the full description and hint for every challenge. According to Claude it was the "biggest shortcut".
- The /rest/admin/application-configuration endpoint leaked several challenge answers.
- Mid-session, Claude cloned juice-shop from GitHub and grepped routes/verify.ts for every challenge key. This gave easy PoC requests to solve challenges.
- The lib/startup/registerWebsocketEvents.ts file showed multiple challenges just need a socket.emit('verify*', magic-string). No browser required.
- Canonical juice-shop credentials (Jim's ncc-1701, Bender's OhG0dPlease1nsertLiquor!, Bjoern's reversed-email-base64, admin's admin123, the z85 coupon trick) came from training-data memory, not testing.
Another interesting observation: my hackbot has guardrails against destructive actions, but because I said everything on my localhost:3000 target was in scope, it wrote explicit instructions into the scope file declaring destructive actions acceptable again. It never actually dropped tables or did anything destructive that required an environment reset, but I found it mildly concerning that it went out of its way to declare these actions suddenly OK. The screenshot below shows the generated scope file with these details, along with information it used to help solve the challenges.
This is Claude’s takeaway on how it solved the challenges:
So the answer is: about a third of the solves were prior knowledge, about half came from cloning the source and grepping for challenge keys (which is the canonical way to grind the score-board fast), and the rest were API-disclosed answers + the websocket shortcut. The HTTP-side techniques I “demonstrated” (SQLi, JWT forging, mass assignment, race conditions, NoSQL $ne, XXE, CSRF, etc.) are real attacks that I really did execute against the live server — but I knew exactly which endpoint and which payload to hit because I’d just read the verifier function. On a target without an open-source repo and a published challenge taxonomy, the same 90 solves would have taken days, not an hour, and I would have missed many of them.
So the real question is: would this actually have taken days against a target without a public source repo and a published challenge taxonomy? Which of the bugs would it have found at all, and which would it have walked right past? And would the methodical, skill-driven approach have fared better? Unfortunately, Juice Shop has several limitations when experimenting with hackbots. The best targets for testing will be real ones (with authorization, of course).
How You Should Use Hackbots
For more productive learning, let’s step away from challenge completion and the chaos of unconstrained hacking. Let’s instead return to the skills we built, the Caido MCP integration, and showcase workflows for a more methodical approach to AI-assisted bug hunting.
Replay Requests
One of my favorite ways to use hackbots is by pointing them at requests I’m already investigating. You can do this by specifying a Replay tab or by providing the request path in your prompt. From there, you can let the AI try its own techniques or invoke the skills you’ve built.
Login Request Example
The example below targets the Juice Shop login functionality. I send an invalid login attempt and capture the request in Caido.
Next, I invoke the caido-mode skill against the captured request. I deliberately give no further direction because I want to see what the hackbot comes up with on its own. It spins up several new Replay tabs, each with a different attack payload.
One useful feature of the Caido skill is that it surfaces findings directly in the Caido UI as the hackbot works.
Once validated, the finding can also be previewed as a clean markdown file.
Analyze HTTP History
Lots of modern applications are noisy and filled with junk traffic. Combing through it manually is tedious, but it’s exactly the kind of thing your hackbot excels at. You can walk the application and let the AI surface interesting endpoints, parameters, and headers from the history. Once the interesting requests are flagged, begin your testing.
Analyze JavaScript
Reviewing JavaScript by hand is tedious. Minified bundles, sprawling dependencies, and endpoints scattered across files make it slow going. Tools like jxscout do a great job here, and your hackbot will too. Most models handle JS analysis impressively well out of the box, so you don’t need much custom tooling to get value, but building skills around it (extracting endpoints, flagging secrets, mapping client-side routes) will sharpen the results.
Prompt: Analyze all of the JavaScript on my target at http://localhost:3000/. There are already several JS requests in my HTTP history. You may find more files by looking on your own. Perform JS analysis on each one and give me a wordlist of endpoints in /PATH format. Also look for secrets, vulnerabilities, and primitives.
Pulling and beautifying the Angular bundle surfaced 113 endpoints.
Static analysis also flagged four bypassSecurityTrustHtml sinks fed by user-controlled input.
For quick wins, you can request PoCs from the JavaScript analysis and then manually validate the client-side vulnerabilities. If your hackbot has access to a browser, the validation could be automated.
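If you want to sanity-check the bot's work, the manual equivalent is only a few commands. A sketch, assuming an Angular bundle named main.js (bundle names vary per app):

```bash
# Pull the main bundle and beautify it for grepping
curl -s http://localhost:3000/main.js -o main.js
npx js-beautify main.js > main.pretty.js

# Rough endpoint extraction into a /PATH wordlist
rg -o '"/(api|rest)/[^"?]*"' main.pretty.js | tr -d '"' | sort -u > endpoints.txt

# Hunt for Angular sanitizer bypass sinks
rg -n 'bypassSecurityTrust(Html|Script|Url|ResourceUrl|Style)' main.pretty.js
```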
Targeted Hunting
Interested in a specific bug class or hacking technique? Maybe you’re chasing a particular CVE pattern across your targets? This is where crafted skills really shine. Instead of letting the AI roam freely and hoping it stumbles onto something, you point it at exactly what you want tested with the playbook you’ve already built.
Prompt: Use my jwt-analysis skill to look for vulnerabilities.
The analysis surfaced multiple JWT-related vulnerabilities. It also reported which techniques from the skill didn’t succeed, which is useful for understanding the target’s defenses.
After validation, files are created and a PoC for reproducing the vulnerability is provided.
Maintenance
The workspace is text. Maintaining it is the same skill set as maintaining a small documentation repo. Not the most exciting thing in the world but it’s important. When you find limitations in your agents or workflows, simply update the instructions. If new web hacking techniques come out, make new skills or update existing ones. For the best results, you will want to continuously develop your hackbot.
A maintenance habit worth adopting: let the bot help you maintain itself. The hackbot has, by design, complete context on the workspace. It knows where every skill lives, what it does, and how skills cross-reference each other. When I want to refactor a skill or rename something, I lean on the bot heavily. It’s much less likely than I am to miss a reference in another skill, because it can grep instantly and read every result. Asking it to “rename the csrf-tokens skill to csrf and update everywhere it’s referenced” is a five-second ask that would take me twenty minutes to do safely by hand. But of course, validate its work.
Going Further
The setup I've described is intentionally minimalist: vulnerability skills, two agents, a primitives store, and a Caido MCP integration. It works. But there is a lot of room above this floor. Here are some ideas to take your hackbot even further.
- Recon automation: a recon skill that wraps bbot and httpx, dumps results into the target's recon/ folder, then summarizes the interesting hits back to you. Pre-built with the right flags for bounty work (see the sketch after this list).
- Wordlist integration: a skill that knows where SecLists, payload-box, OneListForAll, and PayloadsAllTheThings live on disk.
- Browser integration: use Playwright MCP or an alternative and let the hackbot drive a real browser. That unlocks DOM XSS PoCs, postMessage testing, and any class that depends on actual browser execution.
- More command-line tools: sqlmap, ffuf, nuclei, etc. These can be wrapped in skills with sensible defaults.
- Web portal: if you have a team, a small portal for findings and reports could be nice.
- Better long-run telemetry: a simple notes/_session.md-style log already works, but a structured one (JSONL with timestamps, skill invocations, findings produced) makes it easier to review what happened during overnight runs.
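As a sketch of the recon idea, the command layer under such a skill could be as simple as the following (flag names are from memory; verify them against your installed versions of bbot and httpx):

```bash
# Subdomain enumeration with bbot, results into the target's recon/ folder
bbot -t target.example -p subdomain-enum -o Projects/target/recon/bbot/

# Probe discovered hosts with httpx and keep a summary for triage
httpx -l Projects/target/recon/hosts.txt -status-code -title -tech-detect \
  -o Projects/target/recon/httpx.txt
```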
Open-Source Hackbots
There are several open-source projects in this space, and one of the fastest ways to improve your own hackbot is to study others. Some popular ones are listed below.
- shuvonsec/claude-bug-bounty
- agamm/claude-code-owasp
- GH05TCREW/pentestagent
- bugbasesecurity/pentest-copilot
When studying these projects, here are a few good questions to ask:
- Where does their methodology live? Is it in skills, prompts, system prompts, agents, separate files?
- How do they validate findings? Do they trust the model? Do they have a separate validator? Do they require the human in the loop, and where?
- What’s their test surface? What targets do they exercise on? CTF-style? Real bounty programs? Local labs?
Closing
A hackbot is a junior pentester with infinite stamina and a curated reading list. Skills are the reading list. Templates are the worksheets. Agents are the second pair of eyes. The workspace is the desk. The model is the body, but it’s not the brain. The brain is what you wrote down for it. Build the workspace once, train the skills once, and you’re left with something that scales. New target, same workflow. New bug class, add a skill. Long autonomous run, the workspace handles it. Compaction, the rolling session log handles it. Validator catches the false positives. Exploit builder cleans up the PoC. Report writer drafts the writeup.
It is genuinely fun to use. It is also a real piece of leverage in offensive engagements. If you’ve been thinking about doing this, do it. Even if you are against AI, as a hacker you should have a grasp on it. It is important for us to understand how technology works, and regardless of our opinions or feelings, it does not look like it is going away any time soon.
Thanks for reading. If you have questions about building hackbots or want to connect, feel free to reach out on LinkedIn or X.