- NVIDIA (CA)
- …can make a lasting impact on the world. We are now looking for a Senior System Software Engineer to work on user-facing tools for Dynamo Inference Server! ... ECE, or related field (or equivalent experience). + 6+ years of professional software engineering experience. + Strong understanding of modern ML architectures… more
- Amazon (Seattle, WA)
- …principles. - Proficiency in debugging, profiling, and implementing best software engineering practices in large-scale systems. Preferred Qualifications ... at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and...ML frameworks like PyTorch and JAX enabling unparalleled ML inference and training performance. The Inference Enablement… more
- Amazon (Seattle, WA)
- …The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's ... with popular ML frameworks like PyTorch and JAX enabling unparalleled ML inference and training performance. The Inference Enablement and Acceleration team… more
- MongoDB (Palo Alto, CA)
- …+ 5+ years of experience building backend or infrastructure systems at scale + Strong software engineering skills in languages such as Go, Rust, Python, or C++, ... **About the Role** We're looking for a Senior Engineer to help build the next-generation inference platform that supports embedding models used for semantic… more
- NVIDIA (Santa Clara, CA)
- …need to see: + Bachelor's degree (or equivalent experience) in Computer Science (CS), Computer Engineering (CE), or Software Engineering (SE) with 7+ years of ... and motivated software engineers to join us and build AI inference systems that serve large-scale models with extreme efficiency. You'll architect and implement… more
- NVIDIA (Santa Clara, CA)
- NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, and ... NVIDIA's inference libraries, vLLM and SGLang, FlashInfer and LLM software solutions. + Work with cross-collaborative teams across frameworks, NVIDIA libraries… more
- Amazon (Cupertino, CA)
- …and multimodal workloads reliably and efficiently on AWS silicon. We are seeking a Software Development Engineer to lead and architect our next-generation model ... Description AWS Neuron is the software stack powering AWS Inferentia and Trainium machine learning accelerators, designed to deliver high-performance, low-cost inference at scale. The Neuron Serving team develops infrastructure… more
- NVIDIA (CA)
- We are now looking for a Senior System Software Engineer to work on Dynamo & Triton Inference Server! NVIDIA is hiring software engineers for its ... doing: In this role, you will develop open source software to serve inference of trained AI...equivalent experience + 8+ years in Computer Science, Computer Engineering, or related field + Ability to work in… more
- Amazon (Sunnyvale, CA)
- …Context for inference efficiency. Key job responsibilities * Develop high-performance inference software for a diverse set of neural models, typically in ... Description The Sensory Inference team at AGI is a group of...new and existing systems experience - 1+ years of software development engineer or related occupational experience… more
- Amazon (Seattle, WA)
- …cloud-scale machine learning accelerators. This role is for a senior software engineer in the Machine Learning Inference Applications team. This role is ... Description AWS Neuron is the complete software stack for the AWS Inferentia and Trainium...and performance optimization of core building blocks of LLM Inference - Attention, MLP, Quantization, Speculative Decoding, Mixture of… more
- MongoDB (Palo Alto, CA)
- We're looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, ... Atlas and optimized for developer experience. As a Lead Engineer, Inference Platform, you'll be hands-on with...the team **Who You Are** + 8+ years of engineering experience in backend systems, ML infrastructure, or scalable… more
- NVIDIA (Santa Clara, CA)
- …to work on cutting-edge AI technology? Join NVIDIA's TensorRT team as a Senior Software Engineer, and be at the forefront of technology, enabling support in ... + Work closely with teams and stakeholders across the whole hardware and software stack to understand and leverage new features to improve TensorRT's functionality… more
- General Motors (Sunnyvale, CA)
- …job is eligible for relocation assistance.** **About the Team:** The ML Inference Platform is part of the AI Compute Platforms organization within Infrastructure ... of state-of-the-art (SOTA) machine learning models for experimental and bulk inference, with a focus on performance, availability, concurrency, and scalability.… more
- Red Hat (Sacramento, CA)
- The vLLM and LLM-D Engineering team at Red Hat is looking for...developer to join our team as a **Forward Deployed Engineer**. In this role, you will not just ... build software; you will be the bridge between our cutting-edge inference platform (LLM-D (https://llm-d.ai/) and vLLM (https://github.com/vllm-project/vllm))… more
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI ... build, optimize, and scale LLM deployments. As a Machine Learning Engineer focused on distributed vLLM (https://github.com/vllm-project/) infrastructure in the LLM-D… more
- NVIDIA (Santa Clara, CA)
- …open-sourced inference frameworks. Seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models like LLMs, VLMs, multimodal and ... as large language models (LLM) and diffusion models for maximal inference efficiency using techniques ranging from quantization, speculative decoding, sparsity,… more
- Capital One (San Francisco, CA)
- …developing and applying state-of-the-art techniques for optimizing training and inference software to improve hardware utilization, latency, throughput, ... Lead AI Engineer (FM Hosting, LLM Inference) **Overview**...engineering and mathematics, and your expertise in hardware, software, and AI enable you to see and exploit… more
- NVIDIA (Santa Clara, CA)
- …blogs, solution briefs, presentations, explainer videos, and demos that highlight NVIDIA's AI inference capabilities. + Engage with Engineering & Product Teams - ... platforms integrate CPUs, GPUs, DPUs, networking, and a full-stack software ecosystem to power AI at scale. We are looking for a Senior Technical Marketing Engineer to join our growing accelerated computing product team.… more
- quadric.io, Inc (Burlingame, CA)
- …executes both NN graph code and conventional C++ DSP and control code. Role: The AI Inference Engineer at Quadric is the key bridge between the world of AI/LLM ... general-purpose neural processing unit (GPNPU) architecture. Quadric's co-optimized software and hardware are targeted to run neural network...models and Quadric's unique platforms. The AI Inference Engineer at Quadric will [1] port… more