Senior Software Engineer, Data Acquisition

Remote, USA

People Data Labs

People Data Labs builds people data. Use our dataset of 1.5 billion unique person profiles to build products, enrich person profiles, power predictive modeling/AI, run analyses, and more.



Note for all engineering roles: with the rise of fake applicants and AI-enabled candidate fraud, we have built additional measures into our hiring process to identify and remove such candidates.

About Us

People Data Labs (PDL) is the provider of people and company data. We do the heavy lifting of data collection and standardization so our customers can focus on building and scaling innovative, compliant data solutions. Our sole focus is on building the best data available by integrating thousands of compliantly sourced datasets into a single, developer-friendly source of truth. Leading companies across the world use PDL’s workforce data to enrich recruiting platforms, power AI models, create custom audiences, and more.

We are looking for individuals who can balance extreme ownership with a “one-team, one-dream” mindset. Our customers are trying to solve complex problems, and we can only help them achieve their goals by working as a team. Our Data Engineering & Acquisition Team ensures our customers have standardized, high-quality data to build upon.

You will be crucial in accelerating our efforts to build standalone data products that enable data teams and independent developers to create innovative solutions at massive scale. In this role, you will work with a team to continuously improve our existing datasets and to pursue new ones. If you are looking to be part of a team discovering the next frontier of data-as-a-service (DaaS), with a high level of autonomy and opportunity for direct contributions, this might be the role for you. We like our engineers to be thoughtful, quirky, and willing to fearlessly try new things. Failure is embraced at PDL as long as we continue to learn and grow from it.

What You Get to Do

  • Use and develop web crawling technologies to capture and catalog data on the internet (see the sketch after this list)
  • Support and improve our web crawling infrastructure
  • Structure, define, and model captured data, providing semantic data definitions and automating data quality monitoring for the data we crawl
  • Develop new techniques to increase speed, efficiency, scalability, and reliability of web crawls
  • Use big data processing platforms to build data pipelines, publish data, and ensure the reliable availability of the data we crawl
  • Work with our data product and engineering teams to design and implement new data products with captured data, and to enhance existing products
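
To give a concrete flavor of the crawling work above, here is a minimal sketch of a polite, single-domain, breadth-first crawler. It is purely illustrative: Python with the requests and BeautifulSoup libraries is an assumption made for the example, not a description of PDL's actual stack.

```python
# Hypothetical illustration only: a polite, single-domain, breadth-first
# crawler. requests and BeautifulSoup are assumptions for this sketch.
import time
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl(seed_url, max_pages=50, delay_seconds=1.0):
    """Capture page titles from one domain, breadth-first."""
    allowed_host = urlparse(seed_url).netloc
    queue = deque([seed_url])
    seen = {seed_url}
    captured = {}

    while queue and len(captured) < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
        except requests.RequestException:
            continue  # skip unreachable or erroring pages

        soup = BeautifulSoup(response.text, "html.parser")
        captured[url] = soup.title.string if soup.title else None

        # Enqueue unseen links that stay on the seed's domain.
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if urlparse(link).netloc == allowed_host and link not in seen:
                seen.add(link)
                queue.append(link)

        time.sleep(delay_seconds)  # be polite between requests

    return captured
```

A production crawler would layer robots.txt handling, retries, and distributed scheduling on top of a skeleton like this.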

The Technical Chops You’ll Need

  • 7+ years of industry experience, with clear examples of strategic technical problem solving and implementation
  • Strong software architecture and development fundamentals for backend applications
  • Solid understanding of the browser rendering pipeline and web application architecture (auth, cookies, HTTP request/response)
  • Solid programming experience: strong grasp of object-oriented design and experience building applications using asynchronous programming paradigms (e.g., async/await, event loops, or concurrency libraries; see the sketch after this list)
  • Experience building crawlers
  • Proficient in Linux/Unix command-line utilities, Linux system administration, architecture, and resource management
  • Experience evaluating data quality and maintaining consistently high data standards across new feature releases (e.g., consistency, accuracy, validity, completeness)
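
As an illustration of the asynchronous-programming point above, here is a minimal sketch of bounded-concurrency page fetching with async/await. It is hypothetical: asyncio and aiohttp are assumptions chosen for the example, and nothing here is prescribed by the posting.

```python
# Hypothetical illustration only: bounded-concurrency fetching with
# async/await. asyncio and aiohttp are assumptions for this sketch.
import asyncio

import aiohttp


async def fetch(session, url, semaphore):
    """Fetch one URL, with the semaphore capping in-flight requests."""
    async with semaphore:
        async with session.get(url) as response:
            return url, response.status, await response.text()


async def fetch_all(urls, max_concurrency=10):
    """Fetch many URLs concurrently on a single event loop."""
    semaphore = asyncio.Semaphore(max_concurrency)
    timeout = aiohttp.ClientTimeout(total=10)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        tasks = [fetch(session, url, semaphore) for url in urls]
        # return_exceptions=True so one bad URL does not sink the batch
        return await asyncio.gather(*tasks, return_exceptions=True)


if __name__ == "__main__":
    results = asyncio.run(fetch_all(["https://example.com"]))
    print(results)
```

The semaphore caps in-flight requests, which is the usual way to keep an async fetcher from overwhelming either the target site or the local event loop.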

People Thrive Here Who Can

  • Thrive in a fast-paced environment and work independently
  • Work effectively in a remote setting (proactively manage blockers, reach out with questions, and participate in team activities)
  • Communicate clearly in writing, both on Slack/chat and in documents
  • Write data design docs (pipeline design, dataflow, schema design)
  • Scope and break down projects, and communicate progress and blockers effectively to your manager, team, and stakeholders

Some Nice To Haves

  • Degree in a quantitative discipline such as computer science, mathematics, statistics, or engineering
  • Experience as a Red Teamer
  • Experience working in data acquisition
  • Experience with network architecture and with debugging and inspecting network traffic (DNS, IPv4, proxies, application ports and interfaces; packet capture and analysis)
  • Experience with Apache Spark
  • Experience with SQL, including writing advanced queries (e.g., window functions, CTEs; see the sketch after this list)
  • Experience with streaming data platforms (e.g., Kafka or other pub/sub systems; Spark Streaming or other stream processing)
  • Experience with cloud computing services (AWS preferred; GCP, Azure, or similar)
  • Experience working in Databricks (including Delta Live Tables, data lakehouse patterns, etc.)
  • Knowledge of modern data design and storage patterns (e.g., incremental updating, partitioning and segmentation, rebuilds and backfills)
  • Experience with data warehousing (e.g., Databricks, Snowflake, Redshift, BigQuery, or similar)
  • Understanding of modern data storage formats and tools (e.g., Parquet, ORC, Avro, Delta Lake)
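
To illustrate the SQL, Spark, Parquet, and partitioning items above, here is a minimal sketch of a deduplication job that combines a CTE with a window function and writes partitioned Parquet. All table, column, and bucket names are invented for the example.

```python
# Hypothetical illustration only: deduplicate crawled profile records with a
# CTE plus a window function, then write partitioned Parquet. All table,
# column, and bucket names (raw_profiles, profile_id, crawled_at, crawl_date)
# are invented for this sketch.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dedupe-profiles").getOrCreate()

spark.read.parquet("s3://example-bucket/raw_profiles/") \
    .createOrReplaceTempView("raw_profiles")

deduped = spark.sql("""
    WITH ranked AS (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY profile_id
                   ORDER BY crawled_at DESC
               ) AS rn
        FROM raw_profiles
    )
    SELECT * FROM ranked WHERE rn = 1
""")

# Drop the helper column and publish, partitioned by crawl date.
deduped.drop("rn").write.mode("overwrite") \
    .partitionBy("crawl_date") \
    .parquet("s3://example-bucket/profiles_clean/")
```

Keeping only the most recent record per profile_id via ROW_NUMBER() is a common pattern for the kind of incremental updates and rebuilds mentioned above.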

Our Benefits

  • Stock
  • Competitive Salaries
  • Unlimited paid time off
  • Medical, dental, & vision insurance 
  • Health, fitness, and office stipends
  • The permanent ability to work wherever and however you want

Comp: $160K - $200K

People Data Labs does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity or any other reason prohibited by law in provision of employment opportunities and benefits.

Qualified Applicants with arrest or conviction records will be considered for Employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.

Personal Privacy Policy for California Residents
https://www.peopledatalabs.com/pdf/privacy-policy-and-notice.pdf
