Staff Data Engineer - Nepal
- Industry: Other
- Category: Programming/Software Development
- Location: Kathmandu, Nepal
- Expiry date: Aug 29, 2025
Job Description
About COVU:
COVU is a venture-backed insurtech startup modernizing independent agencies through AI-native operations. We unify data, communications, and workflows to empower agents and automate insurance servicing at scale. Our platform blends cloud infrastructure, deep integrations, and cutting-edge AI to build the future of intelligent insurance distribution.
Role Overview:
We’re hiring a Staff Data Engineer to lead the development of our central data platform, which powers agent workflows, BI dashboards, and AI services across all domains—insurance, CRM, communications, and more. You’ll build and scale pipelines from agency management systems (AMSs), insurance carriers (via integrations or RPA), and third-party data providers (e.g., LexisNexis, Verisk). You’ll ensure the platform is secure, observable, high-performing, and ready to serve both real-time and batch needs across internal teams. This role also includes automating manual ETL processes, especially in onboarding and data mapping, and collaborating cross-functionally to deliver usable data to BI, AI, and operations teams.
What You’ll Do:
Own the Data Platform
- Build high-scale, multi-source pipelines (AMS, carriers, CRM, comms, 3rd-party enrichment)
- Architect for real-time and batch processing, with observability and access control baked in
Automate with AI
- Use LLMs or custom rules to automate onboarding workflows like mapping or normalization
- Build tools to reduce repetitive ETL work and accelerate data integration velocity
Support Multi-Domain Use
- Enable performance for insurance data, customer comms, CRM/case records, and more
- Collaborate with BI, AI, and product teams to deliver clean and usable data models
Ensure Quality and Compliance
- Implement QA, logging, monitoring, RBAC, and PII protection
- Maintain integrity and trust in the data used by our platform and agents
Who You Are:
- 8+ years of experience in data engineering or analytics engineering
- Expert in Python, SQL, and dbt (Java a plus)
- Proven experience with AWS-native tools: Redshift, Lambda, Glue, Step Functions, EventBridge, etc.
- Experience working with operational databases (e.g., PostgreSQL, DynamoDB)
- Experience integrating structured and unstructured data from external vendors or APIs
- Familiar with BI tools (QuickSight, Tableau) and modern DataOps practices
- Able to flex between platform building, BI support, and automation
- Bonus: familiarity with insurance data (e.g., Epic, ACORD, Verisk) or LLM-driven data automation
Why Join Us:
- Build the data backbone of an AI-native platform across multiple domains
- Work on high-impact pipelines and automations that drive the business
- Hybrid collaboration with a sharp, mission-driven LA-based team
- Shape how data empowers our agents, AI, and business decisions