Data Analyst/Engineer

Frisco, United States

WorldLink US

WorldLink is a leading provider of Data & Analytics services with a global reach and 25 years of experience.


TITLE: Data Analyst/Data Engineer

POSITION TYPE: Full Time (W2)

LOCATION: Frisco, TX



ABOUT WorldLink:

WorldLink is a rapidly growing information technology company at the forefront of the tech transformation. From custom software development to cloud hosting, from big data to cognitive computing, we help companies harness and leverage today’s most cutting-edge digital technologies to create value and grow.

Collaborative. Respectful. Work hard, play hard. A place to dream and do. These are just a few words that describe what life is like at WorldLink. We embrace a culture of experimentation and constantly strive for improvement and learning.

We take pride in our employees and their future with continued growth and career advancement. We put TEAM first. We are a competitive group that likes to win. We're grounded by humility and driven by ambition. We're passionate, and we love tough problems and new challenges. You don't hear a lot of "I don't know how" or "I can't" at WorldLink. If you are passionate about what you do and have fun while doing it, are tired of rigid and strict work environments, and would like to work in a non-bureaucratic startup culture, WorldLink may be the place for you.

For more information about our craft, visit https://worldlink-us.com.

WHO we’re looking for:

We are looking for a Data Analyst/Engineer who will be responsible for developing sustainable, data-driven solutions with current and next-generation data technologies to drive our business and technology strategies. You should possess strong knowledge of and interest in big data technologies and have a strong background in data engineering. You will build data pipeline frameworks to automate high-volume batch and real-time data delivery, and continuously integrate and ship code into our cloud production environments. You will also be responsible for working directly with Product Owners and customers to deliver data products in a collaborative and agile environment, and for assisting in creating architectures using cloud-native technologies.

Role and Responsibilities:
  • Design AWS data ingestion frameworks and pipelines based on the specific needs driven by the Product Owners and user stories.
  • Build robust, scalable, production-ready data pipelines.
  • Unit test pipelines to ensure high quality.
  • Leverage capabilities of Databricks Delta Lake functionality as needed.
  • Leverage capabilities of Databricks Lakehouse functionality as needed to build Common/Conformed layers within the data lake.
  • Build data APIs and data delivery services to support critical operational and analytical applications.
  • Contribute to the design of robust systems with an eye toward long-term maintenance and support of the application.
  • Leverage reusable code modules to solve problems across the team and organization.
  • Handle multiple functions and roles for projects and Agile teams.
  • Define, execute, and continuously improve our internal software architecture processes.
Required Experience and Education:

  • At least 4 years of experience with the following big data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS.
  • 5+ years of experience developing applications with monitoring, build tools, version control, unit testing, TDD, and change management to support DevOps.
  • BS degree in Computer Science, Data Engineering or related field.
  • Intermediate- to senior-level experience in a Data Engineering role, with demonstrated strong execution capabilities.
  • Must have prior data engineering and ETL experience.
  • Demonstrated experience with Agile Scrum SDLC best practices: coding standards, reviews, code management, build processes, and testing.
  • History of successfully developing software following an Agile methodology.
  • Search engine integration and data catalog/metadata store experience is preferred.
  • Experience deploying advanced services on the cloud and working with Data Architects to deploy AI/ML and cutting-edge data lakes, warehouses, and pipelines is a plus (especially using Amazon SageMaker).
  • Familiarity with machine learning implementation using PySpark.
  • Experience building a data lake on AWS, with hands-on experience in S3, EKS, ECS, AWS Glue, AWS KMS, Amazon Kinesis Data Firehose, and EMR.
  • Experience working with NoSQL databases such as Cassandra, HBase, and Elasticsearch.
  • Hands-on experience leveraging CI/CD to rapidly build and test application code.
  • Expertise in data governance and data quality.
  • Experience working with PCI data and with data scientists is a plus.
  • Hands-on experience with any of the following programming languages: PySpark, Python, R, Scala.
Necessary Skills and Attributes:
  • Self-motivated individual with the ability to thrive in a team-based or independent environment.
  • Detail-oriented with strong organization skills.
  • Ability to work in a fast-paced environment.
  • Ability to work with limited supervision and exercise discretion.
  • Excellent oral and written communication skills.
  • Ability to present new ideas, approaches and information clearly.
  • Diligent work ethic and insatiable desire to learn and develop skills.
  • Ability to acquire new knowledge quickly.
  • Strong interpersonal skills.
  • Excellent time management skills.
  • Cultural sensitivity/awareness.
  • Successfully complete assessment tests offered on Pluralsight, Udemy, etc., or complete certifications to demonstrate technical expertise on more than one development platform.
Preferred Qualifications:
  • Experience working with a combined in-house and outsourced team.
  • Experience working in a geographically separated team including offshore resources.
  • 1+ years of experience with other cloud services such as Microsoft Azure, Google Compute Engine, or others.
  • 2+ years of experience working with streaming using Spark, Flink, or Kafka.
  • 2+ years of experience working with dimensional data models and pipelines.
  • Intermediate-level experience/knowledge in at least one scripting language (Python, Perl, JavaScript).
  • Hands-on design experience with data pipelines, including joining structured and unstructured data.
  • Experience implementing open-source frameworks and exposure to various open-source and packaged software architectures (Elasticsearch, Spark, Scala, Splunk, Apigee, Jenkins, etc.).
  • Experience with various NoSQL databases (Hive, MongoDB, Couchbase, Cassandra, and Neo4j) is a plus.
Physical Demands:

The physical demands described here are representative of those that must be met by a contract employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

While performing the duties of this job, the contract employee is occasionally required to stand, clean, crawl, kneel, sit, sort, hold, squat, stoop, twist the body, walk, use hands to finger, handle, or feel objects, tools, or controls, reach with hands and arms, climb stairs or ladders and scaffolding, talk or hear, and lift up to 20 pounds. Specific vision abilities required by the job include the ability to distinguish the nature of objects by using the eye. The role requires operating a computer keyboard and viewing a video display terminal between 50% and 95% of work time, including prolonged periods, and involves considerable (90%+) work utilizing high visual acuity/detail, numeric/character distinction, and moderate hand/finger dexterity.

Work is performed under time schedules and stress that are normally periodic or cyclical, including time-sensitive deadlines, intellectual challenge, some language barriers, and project management deadlines. The role may require working additional time beyond the normal schedule and periodic travel.

WHAT we’ll bring:

During your interview process, our team can fill you in on all the details of our industry-competitive benefits and career development opportunities. A few highlights include:
  • Medical Plans
  • Dental Plans
  • Vision Plan
  • Life & Accidental Death & Dismemberment
  • Short-Term Disability
  • Long-Term Disability
  • Critical Illness/Accident/Hospital Indemnity/Identity Theft Protection
  • 401(k)
WHAT you should know:

Our success begins and ends with our people. We embrace diverse perspectives and value unique human experiences. WorldLink is an Equal Employment Opportunity and Affirmative Action employer. All employment at WorldLink is decided on the basis of qualifications, merit, and business need. We endeavor to continue our footprint as a diverse organization by highlighting opportunities for all people. WorldLink considers applicants for all positions without regard to race, color, religion or belief, sex (including pregnancy and gender identity), age, national origin, political affiliation, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, physical or mental disability, or any other characteristic protected by applicable laws. People with disabilities who need assistance with any part of the application process should contact us.

This job description is designed to cover the main responsibilities and duties of the role, but it is not intended to be a comprehensive list of all duties.
