• Master’s degree in Computer Science or related field
• At least 3 years of software development experience
• A minimum of 2 years of experience working with distributed systems
• Knowledge in distributed system design, data pipelining, and implementation
• Knowledge in machine learning algorithms
• Knowledge of and experience in building large-scale applications using software design patterns and OO design principles
• Experience with Java, Scala, and Python
• Experience with either distributed computing (Hadoop/Spark/Cloud) or parallel processing (CUDA/threads/MPI)
• Expertise in design patterns (UML diagrams) and data modeling for large-scale analytic systems
• Experience researching, analyzing, and converting large amounts of raw collected data and content into structured datasets that preserve data context, enabling the productization of new products
• Experience with data warehousing and distributed/parallel processing of large datasets using map/reduce computation on Linux clusters (e.g., Hadoop, cloud technologies, HDFS)
• Experience with modern development methodologies such as Agile, Scrum, and the SDLC
• Ability to work in a research-oriented, fast-paced, and highly technical environment
• Quick thinker and fast learner with a collaborative spirit and excellent communication and interpersonal skills
** U.S. citizens and those authorized to work in the United States without sponsorship are encouraged to apply. We are unable to provide sponsorship at this time.
** All your information will be kept confidential according to EEO guidelines.