Agent Camp 2026 · Red Bank, NJ · Finite × HRF
What the camp
actually taught.
Two days. A small room. Real systems getting built in real time. Not a conference — a build sprint with engineers, operators, and people doing serious work.
A building cohort run by
Finite and Human Rights Foundation.
Finite builds open-source AI infrastructure for operators. HRF funds freedom technology for dissidents and activists worldwide. Agent Camp brings both worlds together — practical AI engineering with a real-world mandate: tools that give individuals agency over their own intelligence stack.
Three levels of
AI interaction
The same underlying model. Completely different outcomes. What changes is the scaffolding — tools, memory, autonomy.
L1 · Traditional Chat
Open a chat interface, ask a question, copy the answer, paste it manually. Most people live here. Useful — not leverage.
L2 · Vibe Coding
Scripts connect the AI to other tools. Automates the copy-paste. Faster, but still fragile. Human glue at every step.
L3 · Agents
Model + tools + loop + memory. Plans, executes, and remembers across sessions. You define the goal. The system figures out the path.
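The L3 pattern above (model + tools + loop + memory) can be sketched as a minimal loop. This is illustrative only: `call_model`, the tool registry, and the stop condition are hypothetical stand-ins, not any specific framework's API.

```python
# Minimal agent loop sketch: model + tools + loop + memory.
# `call_model` is a hypothetical stand-in for a real LLM call.

def call_model(goal, memory, history):
    # Placeholder decision logic; a real agent would call an LLM here.
    # Returns either {"tool": name, "args": {...}} or {"done": result}.
    if not history:
        return {"tool": "search", "args": {"query": goal}}
    return {"done": f"answered: {goal}"}

TOOLS = {
    "search": lambda query: f"results for {query!r}",  # toy tool
}

def run_agent(goal, memory, max_steps=10):
    history = []
    for _ in range(max_steps):
        action = call_model(goal, memory, history)
        if "done" in action:
            return action["done"]           # goal reached; exit the loop
        result = TOOLS[action["tool"]](**action["args"])
        history.append((action, result))    # short-term memory for this session
    return "step budget exhausted"
```

The point of the sketch is the shape, not the parts: the model only decides, the tools only act, and the loop plus memory are what turn a chat model into an agent.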
What the camp engineers
actually explained.
01 · The brain hasn't changed. The plumbing has.
L1, L2, L3 — all the same model. The scaffolding is what changes: the tools it reaches, the memory it reads, the loop it runs in. You're not building new AI. You're building better infrastructure around the same one.
02 · Memory is a folder of text files. That's it.
No magic. The agent reads markdown files at session start, works in the context window, writes updated notes at session end. What falls off is gone. The agent is only as good as what it can read at startup.
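That read-at-start, write-at-end cycle fits in a few lines. A minimal sketch, assuming a `memory/` folder with one markdown file per topic; the directory name and layout are illustrative, not any particular agent's convention.

```python
from pathlib import Path

MEMORY_DIR = Path("memory")  # assumed layout: one markdown note per topic

def load_memory():
    """Read every markdown note at session start into the context."""
    MEMORY_DIR.mkdir(exist_ok=True)
    return {p.stem: p.read_text() for p in sorted(MEMORY_DIR.glob("*.md"))}

def save_note(topic, text):
    """Write an updated note at session end; anything not written is gone."""
    MEMORY_DIR.mkdir(exist_ok=True)
    (MEMORY_DIR / f"{topic}.md").write_text(text)
```

The design consequence is the one named above: persistence is an explicit write, so a note the agent never saves simply does not exist next session.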
03 · The adoption gap is a business opportunity.
84% of 8.1 billion people have never touched an AI tool. The 0.04% using coding scaffolds is 3.2 million people. The "AI revolution" is a very small room that thinks it's the whole world.
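Those figures are simple arithmetic on the camp's own numbers (8.1 billion people, 0.04% on coding scaffolds), taken at face value:

```python
population = 8.1e9        # world population, per the claim above
scaffold_share = 0.0004   # 0.04% using coding scaffolds

scaffold_users = population * scaffold_share
print(round(scaffold_users / 1e6, 2))  # millions of people → 3.24
```

Rounded to one decimal, that is the 3.2 million quoted above.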
04 · Privacy is a posture, not a setting.
Cloud AI = trust relationship, not a technical guarantee. Real privacy: inference that never leaves your hardware. Path: OpenCode over ClaudeCode → local models → encrypted inference.
05 · Open source keeps the market honest.
LLaMA, Mistral, DeepSeek — behind the frontier, but free and competitive. They're why inference costs dropped 90% in 18 months. Without them, closed labs price unilaterally. That's not idealism — that's market mechanics.
06 · Human guardians, not just human gates.
Each agent needs someone responsible for calibration — not the technical builder, but the domain expert who validates that the process is correct. The person who knows the workflow and the person who builds the system are often different. Design for that separation.
07 · The constraint is knowing what to automate.
At $10/month for serious individual usage, cost isn't the bottleneck. The hard part is identifying which parts of your workflow are worth automating — and having the discipline to let the machine run without taking it back.
08 · We also built two websites. And filed our IRS taxes.
Two full websites shipped live before leaving New Jersey — this one and Apex Strategy. Also used agents to complete and file US federal taxes. Yes, really. The IRS did not crash. We're calling it a win.
What AI actually
costs
One token ≈ one word. Prices have dropped ~90% in 18 months and are still falling. Full stack breakdown →
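Using the rule of thumb above (one token ≈ one word), a back-of-envelope monthly estimate looks like this. The per-million-token price and the daily word count are placeholder assumptions for illustration, not quotes from any provider:

```python
def monthly_cost(words_per_day, price_per_million_tokens, days=30):
    """Rough cost estimate: one token ~= one word, per the rule of thumb."""
    tokens = words_per_day * days
    return tokens / 1e6 * price_per_million_tokens

# Example: 200k words/day at an assumed $3 per million tokens.
print(round(monthly_cost(200_000, 3.0), 2))  # → 18.0
```

At numbers in that range, a serious individual workload lands near the tens-of-dollars-per-month figure the camp cited, which is why the bottleneck is knowing what to automate rather than paying for it.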