$160k - $180k
We started as app developers who just wanted a developer-friendly API for push notifications. Finding no good solution, we built one ourselves.
Today, we are the leading solution for push notifications, in-app messaging, and email. We support over 900,000 developers. OneSignal is available on every platform and development environment, letting content creators focus on quality user engagement instead of complex implementation.
Covid-19's Impact on OneSignal
Covid-19 has accelerated OneSignal's growth: we've seen a 20%+ increase in new accounts created each day and a 20%+ increase in daily message delivery volume.
We are growing faster than ever, and hiring in all departments. We hope you'll apply and we look forward to meeting you!
Skills: C#, C++, Java, Kubernetes, Python, Hive, Kafka, SQL, Hadoop, Data Warehousing, ETL, Data Modeling, Data Analytics, Bash/Shell
We're looking for an engineer interested in developing, maintaining, and scaling data pipelines that empower our organization to make data-driven decisions in all areas of business.
You’ll work on huge datasets (more than 150 billion events a month), building systems that make messaging more powerful and relevant for more than 1 million developers.
We have built a data lake that ingests terabyte-scale data and is queried daily by our internal data users through an MPP SQL engine. We plan to extend this usage to our core product, and you will take a core role in scaling and architecting our data infrastructure. You will also lay the foundation for customer-facing data features that help millions of websites and apps reach their goals.
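To give a flavor of the kind of work involved, here is a minimal sketch of a batch aggregation step a pipeline stage might perform. The event records and field names are purely illustrative assumptions, not OneSignal's actual schema:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event records; field names are illustrative only.
events = [
    {"app_id": "app-1", "day": date(2021, 3, 1), "type": "delivered"},
    {"app_id": "app-1", "day": date(2021, 3, 1), "type": "clicked"},
    {"app_id": "app-2", "day": date(2021, 3, 1), "type": "delivered"},
]

def daily_delivery_counts(events):
    """Aggregate delivered-message counts per (app_id, day)."""
    counts = defaultdict(int)
    for e in events:
        if e["type"] == "delivered":
            counts[(e["app_id"], e["day"])] += 1
    return dict(counts)

print(daily_delivery_counts(events))
```

At the scale described above, a step like this would run as a distributed job (e.g. on Hive) rather than in-process Python, but the shape of the transformation is the same.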
What you'll do:
- Identify innovative applications of data to enable new features in our core product
- Work closely with the product and BI teams to help them better understand our data
- Maintain and develop data pipelines
- Architect solutions to address our data pipeline scaling needs
- Contribute to open source projects
Skills and experience:
- 3+ years of experience working with the Hadoop ecosystem (HDFS, Hive)
- 2+ years of experience working with data pipelines (Airflow, Oozie)
- Experience working with MPP databases (Presto, Impala)
- Experience with at least one statically typed language, such as Rust, Java, or C++
- Experience with scripting languages, e.g. Bash or Python
- Nice to have: experience with R, MATLAB, VBA, C#, or stream processing (Kafka)
- Proficiency in written and oral communication
- Ability to collaborate well on a team
- Ability to deliver solutions independently
- Friendliness, empathy, modesty, and love of learning
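As context for the pipeline experience listed above: orchestrators like Airflow and Oozie fundamentally run tasks in dependency order over a DAG. A minimal sketch of that idea, using Python's standard library (the task names below are illustrative placeholders, not a real OneSignal pipeline):

```python
from graphlib import TopologicalSorter

# Toy pipeline DAG in the spirit of Airflow/Oozie:
# each task maps to the set of upstream tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# Resolve a valid execution order for the tasks.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']
```

Real orchestrators layer scheduling, retries, and backfills on top, but dependency resolution like this is the core.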
Preferred skills and experience:
- Experience operating data infrastructure on Kubernetes
- Experience writing a custom Presto/Hive plugin
- Experience working with structured data serialization (ProtoBuf, Cap’n Proto)
- Strong understanding of the CQRS pattern/event sourcing
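For the last item, the essence of event sourcing is that state is derived by replaying an append-only log of events rather than being stored mutably. A minimal sketch, with names that are illustrative assumptions rather than anything from an actual codebase:

```python
def apply(state, event):
    """Pure reducer: fold one event into the current state."""
    kind, payload = event  # payload: a device id (illustrative)
    if kind == "subscribed":
        return state | {payload}
    if kind == "unsubscribed":
        return state - {payload}
    return state

def replay(events):
    """Rebuild the read model (set of subscribed devices) from the log."""
    state = set()
    for event in events:
        state = apply(state, event)
    return state

log = [("subscribed", "dev-1"), ("subscribed", "dev-2"), ("unsubscribed", "dev-1")]
print(replay(log))  # {'dev-2'}
```

In a CQRS system, the write side appends events like these to the log, while read models such as `replay` builds are maintained separately and optimized for queries.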