Project Lambda

AI · Reinforcement Learning · Python

AI Agents Playing Counter-Strike Using Only Visual Input

Overview

Project Lambda builds two AI agents that play Counter-Strike: Global Offensive using only visual input from the screen. Unlike traditional aim-bots that read game memory to get player coordinates, these agents perceive and react to the game exactly as a human would — through pixels alone.

The project compares two fundamentally different AI approaches: classical computer vision with graph-based pathfinding, versus a deep neural network trained end-to-end on professional gameplay footage using behavioural cloning and offline reinforcement learning.

Agents

Agent 1

Vision + Pathfinding

Combines YOLOv7 object detection with A* pathfinding. Detects players and game objects from raw screen pixels, then navigates the map using classical graph search.

YOLOv7 · A* Pathfinding · Computer Vision
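The project's own navigation code isn't shown here, but the classical graph search behind Agent 1 can be sketched. The following is a minimal A* over a 2D occupancy grid (a stand-in for the map's nav mesh; the grid, cost model, and Manhattan heuristic are illustrative assumptions, not the project's actual representation):

```python
import heapq

def astar(grid, start, goal):
    """A* search over a 2D occupancy grid (0 = walkable, 1 = blocked).

    Illustrative sketch: uniform step cost, 4-connected moves, and the
    admissible Manhattan-distance heuristic. Returns the cell path from
    start to goal, or None if the goal is unreachable.
    """
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    # Heap entries: (f = g + h, g = cost so far, cell, path to cell)
    open_heap = [(h(start), 0, start, [start])]
    closed = set()
    while open_heap:
        _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(
                    open_heap,
                    (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None
```

In the full pipeline, YOLOv7 detections would feed goal positions (e.g. an enemy's on-screen location projected onto the map) into a search like this.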

Agent 2

Behavioural Cloning

A deep neural network trained on professional CS:GO match footage using behavioural cloning and offline reinforcement learning. Operates purely on visual input — no game memory access.

Deep Learning · Behavioural Cloning · Offline RL
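At its core, behavioural cloning is supervised learning: minimise cross-entropy between the policy's action distribution and the expert's recorded actions. The sketch below shows that objective with a linear softmax policy over flattened frame features; it is a toy stand-in for the project's deep network, and every function name and hyperparameter here is illustrative:

```python
import numpy as np

def train_bc_policy(frames, actions, n_actions, lr=0.1, epochs=200, seed=0):
    """Behavioural cloning sketch: fit a linear softmax policy mapping
    flattened frame features to expert actions by gradient descent on
    cross-entropy loss. (Toy stand-in for a CNN trained on match footage.)
    """
    rng = np.random.default_rng(seed)
    X = frames.reshape(len(frames), -1).astype(np.float64)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_actions))
    onehot = np.eye(n_actions)[actions]
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        # Softmax cross-entropy gradient: (predicted - expert) per sample
        W -= lr * X.T @ (probs - onehot) / len(X)
    return W

def act(W, frame):
    """Greedy action for a single frame under the cloned policy."""
    return int(np.argmax(frame.reshape(-1) @ W))
```

Offline RL then refines such a cloned policy using rewards extracted from the same logged footage, without ever interacting with the live game during training.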

Key Aspects

Vision-Only Input

Both agents rely exclusively on screen pixels, with no memory reading and no game API hooks, mirroring how human players perceive the game.
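A vision-only observation typically needs preprocessing before an agent can consume it. Below is a minimal sketch of turning a raw RGB screen grab into a compact observation; the grayscale weights are the standard luma coefficients, while the output resolution and stride-based downsampling are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

def preprocess_frame(frame, out_h=96, out_w=128):
    """Convert a raw RGB screen grab (H, W, 3 uint8) into the kind of
    observation a vision-only agent consumes: grayscale (ITU-R BT.601
    luma weights), downsampled by striding, scaled to [0, 1].
    Sizes here are illustrative, not the project's real resolution.
    """
    gray = frame.astype(np.float32) @ np.array([0.299, 0.587, 0.114], np.float32)
    h, w = gray.shape
    # Coarse downsample via striding; a real pipeline might use area resize
    obs = gray[:: max(1, h // out_h), :: max(1, w // out_w)][:out_h, :out_w]
    return obs / 255.0
```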

Professional Data Training

Agent 2 was trained on a large dataset of professional CS:GO gameplay footage, learning movement, aim, and positioning strategies.

Behavioural Comparison

The core research question: does an AI built with pathfinding behave more or less realistically than one trained end-to-end from human footage?

OpenAI Gym Integration

The game environment is wrapped in an OpenAI Gym interface for standardized agent training and evaluation.
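The Gym interface reduces to two methods: `reset()` returns the first observation, and `step(action)` returns `(observation, reward, done, info)`. The dependency-free sketch below shows that contract; the class name, action list, frame size, and reward comment are all illustrative assumptions (the real wrapper would capture screen pixels and inject keyboard/mouse input):

```python
import numpy as np

class CSGOPixelEnv:
    """Minimal sketch of a Gym-style wrapper around the game.

    The real environment grabs screen pixels and injects inputs; here
    the observation is a placeholder frame so the reset/step contract
    is clear. Names and reward shaping are illustrative only.
    """

    def __init__(self, h=96, w=128, max_steps=100):
        self.observation_shape = (h, w)
        self.action_space = ["forward", "left", "right", "fire"]
        self.max_steps = max_steps
        self._t = 0

    def reset(self):
        """Start a new round and return the first observation."""
        self._t = 0
        return self._grab_frame()

    def step(self, action):
        """Apply one action; return (obs, reward, done, info) per the Gym API."""
        assert 0 <= action < len(self.action_space)
        self._t += 1
        obs = self._grab_frame()
        reward = 0.0  # real env: e.g. reward for kills, penalty for deaths
        done = self._t >= self.max_steps
        return obs, reward, done, {"t": self._t}

    def _grab_frame(self):
        # Real env: screen capture; here a blank placeholder frame.
        return np.zeros(self.observation_shape, dtype=np.float32)
```

Because both agents consume the same observations and emit the same actions, a shared interface like this lets them be evaluated head-to-head under identical conditions.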

Technology

Python · PyTorch · YOLOv7 · OpenAI Gym · Behavioural Cloning · Offline RL