Services
Security research services for frontier AI and connected systems.
OrbitCurve delivers hands-on offensive research across hardware, IoT, and GenAI systems. We prioritize real exploitability, engineering-grade remediation, and defensible assurance.

Hardware Hacking
View full hardware hacking scope
Hardware Hacking & IoT Security Testing
OrbitCurve provides deep, manual hardware hacking and IoT security testing for connected products and embedded systems. Our assessments focus on real exploitability and engineering grade remediation across the entire product stack: device hardware, firmware, companion apps, cloud services, and management APIs.
All work is performed only with written authorization and a clearly defined Rules of Engagement (RoE).
Scope options
IoT security rarely lives in one layer. We scope assessments according to your product architecture and threat model. Scope can include:
- IoT device hardware
- Firmware (static + dynamic analysis)
- Local services on device (web UI, SSH/Telnet, custom daemons)
- Network communications (device to cloud, device to device)
- Wireless (Wi Fi / BLE / RF where applicable)
- Companion mobile applications (Android/iOS)
- Web portals / admin consoles
- Cloud APIs and back end services
- Provisioning / onboarding / recovery / maintenance flows
- Update mechanisms (OTA + local)
Assessment methodology
Hardware and IoT testing is a combination embedded security research, traditional application security, and protocol/API testing.
A typical assessment includes:
- Scope & timeline definition
In scope models, firmware versions, environments, and test boundaries.
- Attack surface mapping
Interfaces, services, protocols, update paths, and trust boundaries.
- Device and firmware analysis
Extract and analyze firmware; validate on device services and security controls.
- Ecosystem security testing
Companion apps, management portals, and cloud APIs that control devices at scale.
- Advanced physical resilience testing (optional)
Fault injection and side channel evaluation where the threat model requires it.
- Reporting & remediation guidance
Clear evidence, impact, and prioritized fixes.
Detailed testing areas
Physical interfaces & debug access
We identify and validate the risk of exposed or misconfigured interfaces, including:
- UART/console access and service modes
- JTAG/SWD presence and access restrictions
- SPI/I²C and peripheral communications exposure
- USB and maintenance ports
- Boot mode behaviors and recovery paths
- “Factory” or “service” functions that bypass normal controls
Firmware extraction & analysis
We assess whether attackers can obtain and analyze firmware, and what they can do with it:
- Firmware extraction feasibility (based on available artifacts and authorized access)
- Static analysis for risky patterns and unsafe functionality
- Discovery of secrets (credentials, tokens, keys, endpoints) and insecure storage
- Backdoor like functionality, hidden admin paths, or unsafe debug code
- Use of outdated/vulnerable components and insecure configuration defaults
Secure boot & trust chain
We evaluate the integrity guarantees of the boot chain (where applicable):
- Secure boot presence and enforcement
- Integrity verification behavior and bypass resistance
- Debug builds and verbose logs that leak sensitive details
- Rollback/downgrade behavior that enables older vulnerable images
Firmware update mechanisms (OTA + local)
Update channels are one of the highest impact IoT attack surfaces. Testing can include:
- Update authenticity/integrity enforcement (signing and verification logic)
- Update transport protections and downgrade resistance behaviors
- Recovery/update flows that bypass trust controls
- OTA APIs (authorization, abuse controls, replay related risks where relevant)
Network services & communications
We validate the security posture of exposed services and communications patterns:
- Enumeration of local and remote exposed services
- Insecure service configurations and unsafe endpoints
- Device to cloud authentication and session handling behaviors
- Trust boundary validation between device, local network, and cloud
- Susceptibility to interception/man in the middle style weaknesses where relevant
- Overly permissive communications and weak enforcement of expected endpoints
Wireless security (when in scope)
We assess wireless onboarding and operational security posture, including:
- Wi‑Fi and BLE configuration flows and authentication assumptions
- Pairing/bonding and role/ownership enforcement
- Weaknesses in local discovery, provisioning, and management behavior
- RF surface review when relevant to the device category and deployment context
Access control & authentication
IoT products often fail at “who is allowed to do what” across device, app, and cloud:
- Default credentials and weak authentication patterns
- Authorization gaps in local services and cloud management APIs
- Privilege escalation paths through roles/permissions
- Replay/relay risk patterns in onboarding and management flows (where applicable)
- Debug interface misuse leading to unauthorized access
Advanced physical attack testing
Fault injection (glitching) evaluation
For products with higher assurance requiremUpdate mechanisms (OTA + local)ents, we can assess resilience against controlled fault conditions intended to bypass critical security checks, such as:
- Validation of whether integrity checks, lockouts, or security critical logic can be influenced under fault conditions
- Review of fault countermeasures and error handling robustness
This work is explicitly scoped and performed only when the threat model includes realistic physical access and the customer authorizes this category of testing.
Side channel evaluation (power/EM/timing)
When cryptographic operations protect device identity, secure boot, secure updates, or sensitive communications, we can assess whether secrets could be at risk through leakage channels such as:
- Power analysis (side channel leakage assessment)
- Electromagnetic (EM) analysis (where applicable)
- Timing leakage considerations in sensitive operations
Side channel evaluation is most relevant when devices rely on cryptography and have adversaries with realistic physical proximity or lab access.
Advanced physical testing often requires multiple device samples and lab friendly conditions; we define these requirements clearly during scoping.
Common issue categories we evaluate
Firmware integrity
- Outdated/vulnerable components and unsafe configurations
- Unsigned or weakly validated firmware images (where applicable)
- Hardcoded credentials, weak crypto usage, insecure APIs
- Untrusted code loading patterns and unsafe execution paths
- Backdoor like behavior or unsafe debug features
Network & communications
- Insecure device to cloud trust assumptions
- Weak encryption posture or downgradeable protections
- Protocol weaknesses and over permissive communications
- Remote management exposure and unsafe defaults
Authentication & authorization
- Default credentials / weak auth mechanisms
- Privilege escalation risks across device/app/cloud
- Debug interfaces enabling unauthorized access
- Sensitive data exposure in logs, memory, or crash artifacts
Hardware level resilience
- Exposed debugging ports and insecure bootloader configurations
- Weak secrets storage design and insufficient protections
- Susceptibility to fault injection bypasses (where applicable)
- Side channel leakage risks in cryptographic operations (where applicable)
Data protection
- Insecure local storage of sensitive data
- Excessive logging (on device or server side)
- Weak key management and predictable crypto patterns
- Lack of secure wipe mechanisms where required
API & cloud control plane
- Authorization gaps enabling fleet level abuse
- Insecure OTA update APIs
- Weak abuse controls and monitoring gaps for device management functions
Deliverables
Every assessment includes:
- Executive summary (risk themes, business impact, priorities)
- Technical report with evidence and reproduction guidance
- Findings prioritized by impact and likelihood
- Remediation guidance (quick wins + long term structural fixes)
- Optional: remediation workshop and retest plan
We aim to document not only the findings, but also the work performed and the verified attack surface, so engineering teams can build durable fixes.
Engagement formats
- Baseline IoT / Hardware Security Assessment
Broad coverage across interfaces, firmware, boot/update chain, comms, and ecosystem.
- Deep Dive (targeted validation)
Additional depth on specific components, protocols, or high risk flows.
- Advanced Physical Testing Add on
Fault injection and/or side channel evaluation aligned to high assurance objectives.
- Release cycle retainer
Ongoing review of new builds, change impact analysis, and regression testing.
What we need from you (typical)
- Written Authorization to Test and Rules of Engagement
- Device models/firmware versions in scope (and any constraints)
- Device samples and required accessories (as applicable)
- Firmware/update packages (if available) and basic architecture overview
- A technical point of contact for clarifications
- For advanced physical testing: additional units and clear acceptance boundaries

AI Red Teaming
View full AI red teaming scope
GenAI systems fail differently than traditional apps. OrbitCurve AI Red Teaming systematically attacks LLMs, agents, and GenAI applications to uncover jailbreaks, prompt injection paths, data exfiltration, tool abuse, safety bypasses, and supply chain risks before adversaries, or regulators/customers do.
Authorized testing only. Written permission + Rules of Engagement (RoE) required.
System level view
We test the model and the system around it:
- Prompts (system/dev/user), templates, routing
- Orchestrators / agent code paths and safety middleware
- Tools, plugins, connectors, and action surfaces
- RAG: ingestion, retrieval policy, vector store, permissions
- Identity/authz boundaries (users/roles/tenants)
- Logging/telemetry, secrets, rate limits, abuse controls
- Third party models/platforms, deployment pipeline, CI/CD
Real exploit paths
We chain failures into concrete impact:
- Unauthorized data access (docs, memory, logs, connectors)
- Policy bypass + unsafe instruction following
- Unintended actions via tools (privilege escalation, request forgery, destructive actions)
- Cross user / cross tenant leakage
- Cost/DoS via token flooding, loops, retry abuse
- Supply chain exposure (plugins/models/datasets/deployments)
Risk & compliance alignment
Findings mapped to business impact and aligned (as required) to:
- NIST AI RMF
- OWASP LLM/GenAI risk categories
- MITRE ATLAS
- Your internal AI governance policies
Who this is for
- Teams building/operating LLM products: assistants, copilots, agentic workflows
- Security/risk/compliance leaders needing defensible assurance
- Platform/ML engineering teams integrating models, tools, and data into critical workflows
Coverage: GenAI specific attack campaigns
Prompt & context attacks
- Direct prompt injection (instruction override)
- Indirect prompt injection (“prompt in data” via docs/web)
- Jailbreak resilience and policy bypass
- Hidden instruction/tool routing manipulation
Data leakage & exfiltration
- System prompt / secret disclosure (tokens, keys, endpoints)
- RAG leakage (bad filtering, weak scoping, over broad retrieval)
- Cross user/tenant leakage via memory, caching, session mix ups
- Logging/telemetry leakage (prompts/responses captured insecurely)
Tool & agent abuse
- Unauthorized tool calls and connector escalation
- Parameter injection into tool calls (unsafe args, hidden directives)
- Workflow hijacking, action replay, confused deputy paths
- Excessive agency (actions beyond user intent)
RAG & untrusted data risks
- Malicious/poisoned documents
- Retrieval manipulation and trust boundary breaks
- Untrusted sources feeding the model (prompt in data, SEO poisoning patterns)
Availability & cost abuse
- Token flooding / long context degradation
- Tool call loops, retry storms, runaway agents
- Rate limit bypass patterns and resource exhaustion vectors
Optional: supply chain & pipeline review (in scope)
- Third party models, plugins, datasets
- CI/CD and deployment workflows
- Model registry and serving configuration integrity
Why OrbitCurve for AI Red Teaming
- Model & behavior focus: jailbreaks, misalignment, unsafe behavior scenarios
- System & supply chain focus: tools/connectors/RAG/identity where real risk accumulates
- Hybrid option: red teaming + orchestrator/agent code review + safety middleware review
- Engineering + governance ready: backlog ready issues, hardening plan, mapping to frameworks
Engagement process
- Kick off & objectives
Define business goals, threat scenarios, scope, RoE, comms.
- AI threat modeling
Inventory assets, data flows, tools, autonomy level, dependencies, trust boundaries.
Identify priority attack paths and misuse cases.
- Environment & access
Test tenant/sandbox, seeded accounts, API keys, least privilege, representative data.
- Baseline evaluation
Smoke tests, policy sanity checks, initial prompt suites to calibrate campaigns.
- Adversarial campaigns
Manual first testing + targeted automation. Early escalation of critical issues.
- Report, hardening & validation
Exploit narratives + prioritized fixes + retest window to confirm mitigations.
Deliverables
AI threat model & attack surface
Assets, trust boundaries, data flows, tools, third party dependencies, priority attack paths.
Findings & exploit narratives
Repro prompts/transcripts, evidence, impact, affected components (model/RAG/tools/APIs/UI), severity, fixes.
Evaluation suite (optional)
Curated scenarios/prompt sets + harness guidance for CI/CD or offline re runs after changes.
Hardening plan & retest
Guardrail/config/code/architecture recommendations prioritized by risk + validation retest.
What we need from you
- Written authorization + RoE
- In scope assets (URLs/endpoints/repos/connectors/environments)
- Test accounts/roles and tenant boundaries to verify
- Prefer staging/sandbox (production only by explicit agreement)
- A single technical POC for rapid clarification
Start an AI Red Teaming engagement
Stress test your LLMs, agents, and GenAI applications against realistic adversaries and turn the results into durable defenses.
Contact OrbitCurve to request scope & quote.