$120 - $150
BlueCargo provides real-time information for trucking companies that move cargo from marine ports to first warehouses. We are building a platform that will be a one-stop shop for truckers to dispatch, schedule their workload, and move containers efficiently.
Our startup was founded by two female entrepreneurs, graduated from Y Combinator (2018 batch), and has raised a $4 million seed round. We are at the beginning of an exciting growth phase: we have already found product-market fit, built a working platform, and reached hundreds of daily active users. We are looking for a Senior Software Engineer to grow the team and deliver the product that takes us to the next level.
If working to bring technology to the logistics industry sounds exciting, then we’d like to connect with you! We are looking for a highly experienced professional who has built several full-stack web applications and has experience with AWS.
Skills: Python, Hive, ETL, Data Modeling, PostgreSQL, Amazon Web Services (AWS)
BlueCargo is a fast-growing startup based in Los Angeles. We are building software to handle the transportation of containers by truck from the ports to the first warehouses - also called first-mile delivery. We are driving the Freight Tech revolution.
We are looking for Data Engineers to design and implement our data pipelines and visualization platform. If working to bring technology to the logistics industry sounds exciting, then we’d like to connect with you!
- Technical team: 4 (objective: x2 in one year)
- Location: Los Angeles
- Design, build and operate BlueCargo’s data pipelines with a focus on performance and reliability
- Participate in new feature development for the container tracking and data visualization platform
- Propose and evaluate storage technologies and methodologies with an eye toward scalability and performance
- Design and implement data pipelines that handle high-volume streaming data
- Lead the data ingestion strategy (web scraping, APIs, and other protocols)
- Maintain a culture of data accuracy and data-driven decisions
- Choose the database infrastructure that will become the new norm in the freight industry
In addition to the following technical skills, we are looking for PROBLEM SOLVERS with an entrepreneurial mindset.
- 3+ years programming in Python (Java/Kotlin/Scala is fine as well!)
- 3+ years architecting with both SQL and NoSQL data stores
- Experience designing schemas and maintaining representations for low-latency, request-cycle queries
- Experience with streaming platforms (PubSub, Kafka, Kinesis) and near-real-time data pipelines
- Working knowledge of statistics and experimental design
- Comfortable building and maintaining data infrastructure in the cloud (AWS preferred)
- Experience in data sourcing technologies (external APIs, web-scraping, EDI files, etc.) and building data management platforms
- Autonomous in your work, proactive while working with cross-functional teams to build creative solutions
- Preferably with prior experience in a high-velocity startup environment
- Either living in Los Angeles or willing to relocate to LA
- Perks: Medical benefits + unlimited PTO
- Fun perks: flexible and international environment
- Support/Community: Enjoy being a member of a Y Combinator company!
- Contact: [email protected], [email protected]
Our current tech:
- Stack: Python/Django (backend), React/JS (frontend), AWS services (Lambda, RDS, S3), PostgreSQL (database), Node-RED, CircleCI and other continuous integration tools
- Team: 3 engineers