Overview
Pulse is tackling one of the most persistent challenges in data infrastructure: extracting accurate, structured information from complex documents at scale. Our approach to document understanding combines intelligent schema mapping with fine-tuned extraction models, succeeding where legacy OCR and other parsing tools consistently fail.
We are a small, fast-growing team of engineers in San Francisco, backed by tier 1 investors and powering Fortune 100 enterprises, YC startups, public investment firms, and growth-stage companies.
What makes our tech special is our multi-stage architecture:
Layout understanding with specialized component detection models
Low-latency OCR models for targeted extraction
Advanced reading-order algorithms for complex structures
Proprietary table structure recognition and parsing
Fine-tuned vision-language models for charts, tables, and figures
If you are passionate about the intersection of computer vision, NLP, and data infrastructure, your work at Pulse will directly impact customers and shape the future of document intelligence.
What we are looking for
Able to work 5 days a week in our San Francisco office
Eager to learn and adapt quickly
Prior startup or founding experience is a plus
About the Role
As a Solutions Engineer at Pulse, you will work at the intersection of customer deployments and core engineering. You will help design, deploy, and operate Pulse in real production environments, often inside customer infrastructure, with a strong focus on reliability, performance, and accuracy.
This role is hands-on and technical. You will work closely with customer engineering teams while partnering with Pulse’s platform, ML, and product teams to ensure successful deployments and continuous improvements.
Responsibilities
Deploy, operate, and debug Pulse services in Kubernetes-based environments
Work directly with customers to support production rollouts, pilots, and ongoing operations
Configure and optimize extraction pipelines, schemas, and validation workflows
Diagnose accuracy, latency, and infrastructure issues across distributed systems
Build internal tools and customer-facing utilities in Python to support deployments
Support API integrations, webhooks, and downstream data delivery workflows
Collaborate with core engineering to surface product gaps and improve platform reliability
Own operational excellence for customer-facing deployments, including monitoring and incident response
Requirements
2+ years building and operating production systems (new grads with strong infrastructure experience will be considered)
Strong experience with Kubernetes, including deploying, debugging, and operating services in production
Proficiency in Python for backend services, tooling, and automation
Experience working with APIs, distributed systems, and asynchronous workflows
Comfort working directly with customer engineering teams on technical deployments
Strong debugging skills across application, infrastructure, and data layers
Clear written and verbal communication, with a bias toward ownership and execution
Nice to have
Experience running workloads on AWS, GCP, or Azure
Familiarity with Docker, Helm, Terraform, or similar tooling
Experience with observability tooling (metrics, logs, tracing)
Background in data infrastructure, ML platforms, or document processing systems
Sponsorship
Visa sponsorship is available.
Compensation and benefits
Competitive base salary plus equity
Performance-based bonus
Relocation assistance for Bay Area moves
Daily meal stipend
Medical, vision, and dental coverage