- NVIDIA (Santa Clara, CA)
- NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, ... and optimize the GPU-accelerated software that powers today's most sophisticated AI applications. Our...inference libraries, vLLM, SGLang, FlashInfer, and LLM software solutions. + Work with cross-functional teams across frameworks,… more
- MongoDB (Palo Alto, CA)
- **About the Role** We're looking for a Senior Engineer to help build the next-generation inference platform that supports embedding models used for semantic ... - fully integrated with Atlas and designed for developer-first experiences. As a Senior Engineer, you'll focus on building core systems and services that power… more
- NVIDIA (CA)
- …see how you can make a lasting impact on the world. We are now looking for a Senior System Software Engineer to work on user-facing tools for Dynamo ... Inference Server! NVIDIA is hiring software engineers for its GPU-accelerated deep learning software team, and we are a remote-friendly work environment.… more
- NVIDIA (Santa Clara, CA)
- …and motivated software engineers to join us and build AI inference systems that serve large-scale models with extreme efficiency. You'll architect and implement ... high-performance inference stacks, optimize GPU kernels and compilers, drive industry...way to integrate research ideas and prototypes into NVIDIA's software products. What we need to see: + Bachelor's… more
- Red Hat (Boston, MA)
- …for enterprises to build, optimize, and scale LLM deployments. We are seeking an experienced Senior ML Ops engineer to work closely with our product and research ... open-source LLMs and vLLM to every enterprise. Red Hat Inference team accelerates AI for the enterprise and brings...teams to scale SOTA deep learning products and software. As an ML Ops engineer, you… more
- NVIDIA (Santa Clara, CA)
- …and eager to work on cutting-edge AI technology? Join NVIDIA's TensorRT team as a Senior Software Engineer, and be at the forefront of technology, enabling ... with teams and stakeholders across the whole hardware and software stack to understand and leverage new features to...models (such as Large Language Models) & frameworks for inference. + Background with C++17. NVIDIA is widely considered… more
- NVIDIA (CA)
- We are now looking for a Senior System Software Engineer to work on Dynamo & Triton Inference Server! NVIDIA is hiring software engineers for its ... What you'll be doing: In this role, you will develop open source software to serve inference of trained AI models running on GPUs. You will balance a variety… more
- Amazon (Seattle, WA)
- …machine learning accelerators. This role is for a senior software engineer in the Machine Learning Inference Applications team. This role is responsible ... Description AWS Neuron is the complete software stack for the AWS Inferentia and Trainium...and performance optimization of core building blocks of LLM Inference - Attention, MLP, Quantization, Speculative Decoding, Mixture of… more
- Amazon (Seattle, WA)
- …The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's ... with popular ML frameworks like PyTorch and JAX enabling unparalleled ML inference and training performance. The Inference Enablement and Acceleration team… more
- NVIDIA (Santa Clara, CA)
- …streamlined deployment strategies with open-source inference frameworks. Seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative ... as large language models (LLMs) and diffusion models for maximal inference efficiency using techniques ranging from quantization, speculative decoding, sparsity,… more
- NVIDIA (Santa Clara, CA)
- We are now looking for a Senior DL Algorithms Engineer! NVIDIA is seeking senior engineers who are mindful of performance analysis and optimization to help ... are unafraid to work across all layers of the hardware/software stack from GPU architecture to Deep Learning Framework...will be doing: + Implement language and multimodal model inference as part of NVIDIA Inference Microservices… more
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI ... build, optimize, and scale LLM deployments. As a Machine Learning Engineer focused on distributed vLLM (https://github.com/vllm-project/) infrastructure in the LLM-D… more
- NVIDIA (Santa Clara, CA)
- … software ecosystem to power AI at scale. We are looking for a Senior Technical Marketing Engineer to join our growing accelerated computing product team. ... ensure a consistent, high-impact go-to-market strategy. This role will focus on AI inference at scale, ensuring that customers and partners understand how to best… more
- Bank of America (Addison, TX)
- Senior Engineer - AI Inference (Addison, Texas; Plano, Texas; Newark, Delaware; Charlotte, North Carolina; Kennesaw, Georgia) (https://ghr.wd1.myworkdayjobs.com/Lateral-US/job/Addison/Senior-Engineer-AI-Inference_25029879) **Job Description:** At Bank… more
- Red Hat (Boston, MA)
- …bring the power of open-source LLMs and vLLM to every enterprise. Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI ... (https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/#the-top-open-source-projects-by-contributors) on GitHub. As a Machine Learning Engineer focused on vLLM, you will be… more
- quadric.io, Inc (Burlingame, CA)
- …executes both NN graph code and conventional C++ DSP and control code. Role: The AI Inference Engineer at Quadric is the key bridge between the world of AI/LLM ... general-purpose neural processing unit (GPNPU) architecture. Quadric's co-optimized software and hardware are targeted to run neural network...models and Quadric's unique platforms. The AI Inference Engineer at Quadric will [1] port… more
- Amazon (Seattle, WA)
- …and the Trn2 and future Trn3 servers that use them. This role is for a software engineer in the Machine Learning Applications (ML Apps) team for AWS Neuron. This ... Description AWS Neuron is the complete software stack for the AWS Inferentia and Trainium...Llama3, GPT OSS, Qwen3, DeepSeek and beyond. The Neuron Inference Technology team works side by side with the… more
- Amazon (Seattle, WA)
- …and the Trn1 and Inf1 servers that use them. This role is for a software engineer in the Machine Learning Applications (ML Apps) team for AWS Neuron. ... Description AWS Neuron is the complete software stack for the AWS Inferentia and Trainium...and runtime engineers to create, build and tune distributed inference solutions with Trn1. Experience optimizing inference … more