Experienced in Offensive Security as a Malware Engineer, Red Teamer, or Ethical Hacker.
Deep interest in Generative AI Agents: you should have hands-on experience with generative AI, ideally including AI agents or copilots, and understand the related concepts.
Experience or certifications related to pen-testing and malware are a big plus. We welcome candidates of all genders, races, and other identities.
About the company
Emerged from the halls of MIT and fueled by experts from Stanford, Google, Apple, and Fortune 100 companies, the CrackenAGI team is forging a new future. Its mission is to drastically improve global digital efficiency and to secure it against adversaries by bridging the gap between AI and AGI to supercharge manual SDLC and cybersecurity processes.
Its flagship product, built on the company's unique Generative AI Agent Framework and Models, offers seamless, reliable, and cost-effective autonomous end-to-end quality and security testing.
About the role
Responsibilities
- Software Engineering: Hands-on engineering and testing of the product in various capacities, with a focus on cybersecurity features; integrate cybersecurity tools and techniques into the CrackenAGI product.
- Security Features Product Management: Participate in client-facing discussions with enterprise customers on cybersecurity topics and help translate learnings from the cybersecurity domain into product management decisions.
- Cybersecurity Features Demos: Employ CrackenAGI for cybersecurity tasks to test its features and demo the product.
- Research & Development: Stay updated with the latest in offensive security techniques. Provide relevant insights to the team and integrate new innovations.
- CrackenAGI Red Teaming: Pentest CrackenAGI itself, including jailbreaking, prompt injection, and other adversarial techniques.
Must-have:
- Experience as an Offensive Security Engineer: Strong understanding of common vulnerabilities, exploitation techniques, and mitigation strategies. Ideally, previous experience in offensive security roles such as Malware Engineer, Red Teamer, or Ethical Hacker, with skills prioritized in the following order:
- Network Security Assessments
- Social Engineering & Phishing Campaigns
- Experience as a Software Engineer: Demonstrable expertise in the Python and C programming languages, with a portfolio of past projects showcasing proficiency. Familiarity with the full software development process, from requirements gathering to deployment and maintenance.
- AI Interest / Experience: Hands-on experience with generative AI tools and an understanding of AI agent and copilot concepts. Ability to leverage AI for novel solutions in the security domain, integrating AI capabilities with traditional security practices.
- Language & Communication Skills: Fluency in both English and Ukrainian. Strong verbal communication skills, with the ability to present findings and demos, and collaborate with both technical and non-technical stakeholders in a dynamic startup environment.
Nice-to-have:
- Advanced Degrees: Master's or Ph.D. in Computer Science, Cybersecurity, or related fields.
- Industry Involvement: Active participation in security conferences, workshops, or CTFs.
- Publication & Research: Contributions to security research, blogs, or whitepapers showcasing expertise.
- Certifications: Recognized certifications in offensive security, such as OSCP, OSCE, or similar.
- Tool Development: Experience in developing or contributing to open-source security tools or frameworks.
What we offer
- Competitive compensation package including equity
- Work alongside the world’s best professionals and researchers
- Growing, vibrant, and energetic startup environment
- Flexibility to work fully remotely, hybrid, or in-office in the Bay Area