
Backend API

Express.js REST API for Compass, running at https://api.compassmeet.com.

Overview

The API handles:

  • User authentication and management
  • Profile CRUD operations
  • Search and filtering
  • Messaging
  • Notifications
  • Compatibility scoring
  • Events management
  • WebSocket connections for real-time features

Tech Stack

  • Runtime: Node.js 20+
  • Framework: Express.js 5.0
  • Language: TypeScript
  • Database: PostgreSQL (via Supabase)
  • ORM: pg-promise
  • Validation: Zod
  • WebSocket: ws library
  • API Docs: Swagger/OpenAPI

Project Structure

backend/api/
├── src/
│   ├── app.ts             # Express app setup
│   ├── routes.ts          # Route definitions
│   ├── test.ts            # Test utilities
│   ├── get-*.ts           # GET endpoints
│   ├── create-*.ts        # POST endpoints
│   ├── update-*.ts        # PUT/PATCH endpoints
│   ├── delete-*.ts        # DELETE endpoints
│   └── helpers/           # Shared utilities
├── tests/
│   └── unit/              # Unit tests
├── package.json
├── tsconfig.json
└── README.md

Getting Started

Prerequisites

  • Node.js 20.x or later
  • Yarn
  • Access to Supabase project (for database)

Installation

# From root directory
yarn install

You must also have the gcloud CLI.

On macOS:

brew install --cask google-cloud-sdk

On Linux (Debian/Ubuntu; requires the Google Cloud apt repository to be configured first):

sudo apt-get update && sudo apt-get install google-cloud-cli

Then:

gcloud init
gcloud auth login
gcloud config set project YOUR_PROJECT_ID

You also need OpenTofu and Docker. On Linux or macOS, the quickest way to install both is the setup script (run from the repo root):

./script/setup.sh

If the script fails, install OpenTofu and Docker manually by following the official instructions for your OS.

Running Locally

# Run all services (web + API)
yarn dev

# Run API only (from backend/api)
cd backend/api
yarn serve

The API runs on http://localhost:8088 when running locally with the full stack.

Testing

# Run unit tests
yarn test

# Run with coverage
yarn test --coverage

Linting

# Check lint
yarn lint

# Fix issues
yarn lint-fix

API Endpoints

Authentication

Method Endpoint Description
POST /create-user Create new user

Users

Method Endpoint Description
GET /get-me Get current user
PUT /update-me Update current user
DELETE /delete-me Delete account

Profiles

Method Endpoint Description
GET /get-profiles List profiles
GET /get-profile Get single profile
POST /create-profile Create profile
PUT /update-profile Update profile
DELETE /delete-profile Delete profile

Messaging

Method Endpoint Description
GET /get-private-messages Get messages
POST /create-private-user-message Send message
PUT /edit-message Edit message
DELETE /delete-message Delete message

Notifications

Method Endpoint Description
GET /get-notifications List notifications
PUT /update-notif-setting Update settings

Search

Method Endpoint Description
GET /search-users Search users
GET /search-location Search by location

Compatibility

Method Endpoint Description
GET /get-compatibility-questions List questions
POST /set-compatibility-answers Submit answers
GET /compatible-profiles Get compatible profiles
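The GET endpoints above take query parameters. As a hedged illustration, here is a minimal URL builder for calling them; the base URL matches the deployed API, but the `limit` parameter and the helper itself are hypothetical (the real client code lives elsewhere in the repo):

```typescript
// Minimal sketch of a request-URL builder for the GET endpoints above.
// The query encoding and the `limit` parameter are assumptions.
const API_BASE = 'https://api.compassmeet.com'

function buildGetUrl(
  endpoint: string,
  params: Record<string, string | number | undefined>,
): string {
  const url = new URL(endpoint, API_BASE)
  for (const [key, value] of Object.entries(params)) {
    // Skip optional params that were not provided
    if (value !== undefined) url.searchParams.set(key, String(value))
  }
  return url.toString()
}

// Example: list profiles with a hypothetical limit parameter
const profilesUrl = buildGetUrl('/get-profiles', {limit: 20})
```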

Writing Endpoints

1. Define Schema

Add endpoint definition in common/src/api/schema.ts:

const endpoints = {
  myEndpoint: {
    method: 'POST',
    authed: true,
    returns: z.object({
      success: z.boolean(),
      data: z.any(),
    }),
    props: z
      .object({
        userId: z.string(),
        option: z.string().optional(),
      })
      .strict(),
  },
}

2. Implement Handler

Create handler file in backend/api/src/:

import {z} from 'zod'
import {APIHandler} from './helpers/endpoint'

export const myEndpoint: APIHandler<'myEndpoint'> = async (props, auth) => {
  const {userId, option} = props

  // Implementation
  return {
    success: true,
    data: {userId},
  }
}

3. Register Route

Add to routes.ts:

import {myEndpoint} from './my-endpoint'

const handlers = {
  myEndpoint,
  // ...
}

Authentication

Authenticated Endpoints

Use the authed: true schema property. The auth object is passed to the handler:

export const getProfile: APIHandler<'get-profile'> = async (props, auth) => {
  // auth.uid - user ID
  // auth.creds - credentials type
}

Auth Types

  • firebase - Firebase Auth token
  • session - Session-based auth
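A handler can branch on the credentials type when an endpoint should only accept one of them. A minimal sketch, assuming the auth object has the `uid` and `creds` fields described above (the exact shape may differ):

```typescript
// Hypothetical auth shape, based on the fields described above.
type AuthedUser = {
  uid: string
  creds: 'firebase' | 'session'
}

// Example guard: an endpoint that only accepts Firebase tokens.
function requireFirebaseAuth(auth: AuthedUser): void {
  if (auth.creds !== 'firebase') {
    throw new Error('This endpoint requires a Firebase Auth token')
  }
}
```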

Database Access

Using pg-promise

import {createSupabaseDirectClient} from 'shared/supabase/init'

const pg = createSupabaseDirectClient()

const result = await pg.oneOrNone<User>('SELECT * FROM users WHERE id = $1', [userId])

Using Supabase Client

Note that this client works only in the front-end (web) code; in the backend API, use pg-promise as shown above.

import {db} from 'web/lib/supabase/db'

const {data, error} = await db.from('profiles').select('*').eq('user_id', userId)

Rate Limiting

The API includes built-in rate limiting:

export const myEndpoint: APIHandler<'myEndpoint'> = withRateLimit(
  async (props, auth) => {
    // Handler implementation
  },
  {
    name: 'my-endpoint',
    limit: 100,
    windowMs: 60 * 1000, // 1 minute
  },
)
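Conceptually, a limiter like this counts requests per key within a time window and rejects requests over the limit. A minimal in-memory fixed-window sketch of the idea (not the actual `withRateLimit` implementation):

```typescript
// Minimal in-memory fixed-window rate limiter sketch -- illustrative only,
// not the real withRateLimit implementation.
type Window = {start: number; count: number}

function makeRateLimiter(limit: number, windowMs: number) {
  const windows = new Map<string, Window>()
  // Returns true if the request is allowed, false if rate-limited.
  return (key: string, now = Date.now()): boolean => {
    const w = windows.get(key)
    if (!w || now - w.start >= windowMs) {
      // Start a fresh window for this key
      windows.set(key, {start: now, count: 1})
      return true
    }
    w.count += 1
    return w.count <= limit
  }
}
```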

Error Handling

Use APIError for consistent error responses:

import {APIError} from './helpers/endpoint'

throw new APIError(404, 'User not found')
throw new APIError(400, 'Invalid input', {field: 'email'})

Error codes:

  • 400 - Bad Request
  • 401 - Unauthorized
  • 403 - Forbidden
  • 404 - Not Found
  • 429 - Too Many Requests
  • 500 - Internal Server Error
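For orientation, a minimal sketch of what such an error class and its mapping to a JSON response might look like; the real class lives in helpers/endpoint and its field names may differ:

```typescript
// Hypothetical sketch of an error type carrying an HTTP status code.
// The real APIError lives in helpers/endpoint; details may differ.
class APIError extends Error {
  constructor(
    public code: number,
    message: string,
    public details?: Record<string, unknown>,
  ) {
    super(message)
  }
}

// Example mapping from a thrown error to a JSON response body.
// Unknown errors fall back to a generic 500.
function toErrorBody(err: unknown) {
  if (err instanceof APIError) {
    return {status: err.code, message: err.message, details: err.details}
  }
  return {status: 500, message: 'Internal Server Error'}
}
```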

WebSocket

WebSocket connections are handled for real-time features:

// Subscribe to updates
ws.subscribe('user/123', (data) => {
  console.log('User updated:', data)
})

// Unsubscribe
ws.unsubscribe('user/123', callback)

Available topics:

  • user/{userId} - User updates
  • private-user/{userId} - Private user updates
  • message/{channelId} - New messages
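The topic strings above follow an entity/id pattern. A minimal pub/sub registry sketch showing how per-topic subscriptions might be tracked and broadcast to (this is illustrative, not the actual ws-layer code):

```typescript
// Minimal topic registry sketch -- not the actual ws-layer implementation.
type Callback = (data: unknown) => void

class TopicRegistry {
  private topics = new Map<string, Set<Callback>>()

  subscribe(topic: string, cb: Callback): void {
    if (!this.topics.has(topic)) this.topics.set(topic, new Set())
    this.topics.get(topic)!.add(cb)
  }

  unsubscribe(topic: string, cb: Callback): void {
    this.topics.get(topic)?.delete(cb)
  }

  // Broadcast, e.g. after a user row changes: publish('user/123', {...})
  publish(topic: string, data: unknown): void {
    this.topics.get(topic)?.forEach((cb) => cb(data))
  }
}
```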

Logging

Use the shared logger:

import {log} from 'shared/monitoring/log'

log.info('Processing request', {userId: auth.uid})
log.error('Failed to process', error)

Deployment

Production Deployment

Deployments are automated via GitHub Actions. Push to main triggers deployment:

# Increment version
# Update package.json version
git add package.json
git commit -m "chore: bump version"
git push origin main

Manual Deployment

cd backend/api
./deploy-api.sh prod

Server Access

Run the commands below from this directory to SSH into the API server, which runs as a virtual machine on Google Cloud. From there you can view logs, inspect files, debug, etc.

# SSH into production server
cd backend/api
./ssh-api.sh prod

Useful commands on server:

sudo journalctl -u konlet-startup --no-pager -ef             # View startup logs
sudo docker logs -f $(sudo docker ps -alq)                   # Container logs
sudo docker exec -it $(sudo docker ps -alq) sh               # Shell into the running container
sudo docker run -it --rm $(sudo docker images -q | head -n 1) sh  # Shell into the latest image
sudo docker rmi -f $(sudo docker images -aq)                 # Remove all images (free disk space)

Environment Variables

Required secrets (set in Google Cloud Secrets Manager):

Variable Description
DATABASE_URL PostgreSQL connection string
FIREBASE_PROJECT_ID Firebase project ID
FIREBASE_PRIVATE_KEY Firebase private key
SUPABASE_SERVICE_KEY Supabase service role key
JWT_SECRET JWT signing secret
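It can help to fail fast at startup if any of these are missing. A hedged sketch, using the variable names from the table above; the `checkEnv` helper is hypothetical, not part of the codebase:

```typescript
// Hypothetical startup check that the required secrets are set.
// Variable names come from the table above.
const REQUIRED_VARS = [
  'DATABASE_URL',
  'FIREBASE_PROJECT_ID',
  'FIREBASE_PRIVATE_KEY',
  'SUPABASE_SERVICE_KEY',
  'JWT_SECRET',
] as const

// Returns the names of any missing variables (empty array means all set).
function checkEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !env[name])
}

// At startup: const missing = checkEnv(process.env)
// if (missing.length > 0) throw new Error('Missing: ' + missing.join(', '))
```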

Testing

Writing Unit Tests

// tests/unit/my-endpoint.unit.test.ts
import {myEndpoint} from '../my-endpoint'

describe('myEndpoint', () => {
  it('should return success', async () => {
    const result = await myEndpoint({userId: '123'}, mockAuth)
    expect(result.success).toBe(true)
  })
})

Mocking Database

const mockPg = {
  oneOrNone: jest.fn().mockResolvedValue({id: '123'}),
}
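One way to use such a stub is to write query logic against a minimal database interface and inject it, so tests can pass a fake instead of a real pg-promise client. A sketch; the injection pattern is an assumption for illustration, not necessarily how these handlers are wired:

```typescript
// Sketch: query logic written against a minimal db interface so a test
// can inject a stub instead of a real pg-promise client.
type Db = {
  oneOrNone: (query: string, values: unknown[]) => Promise<any>
}

// Hypothetical helper that looks up a user's name by id.
async function getUserName(db: Db, userId: string): Promise<string | null> {
  const row = await db.oneOrNone('SELECT name FROM users WHERE id = $1', [
    userId,
  ])
  return row?.name ?? null
}
```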

API Documentation

Full API docs are generated with Swagger/OpenAPI from the route definitions in app.ts.


Setup

This section is only for people creating a server from scratch, for instance for a forked project.

One-time commands you may need to run:

gcloud artifacts repositories create builds \
  --repository-format=docker \
  --location=us-west1 \
  --description="Docker images for API"
gcloud auth configure-docker us-west1-docker.pkg.dev
gcloud config set project compass-130ba
gcloud projects add-iam-policy-binding compass-130ba \
  --member="user:YOUR_EMAIL@gmail.com" \
  --role="roles/artifactregistry.writer"
gcloud projects add-iam-policy-binding compass-130ba \
  --member="user:YOUR_EMAIL@gmail.com" \
  --role="roles/storage.objectAdmin"
gsutil mb -l us-west1 gs://compass-130ba-terraform-state
gsutil ubla set on gs://compass-130ba-terraform-state
gsutil iam ch user:YOUR_EMAIL@gmail.com:roles/storage.admin gs://compass-130ba-terraform-state
tofu init
gcloud projects add-iam-policy-binding compass-130ba \
    --member="serviceAccount:253367029065-compute@developer.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"
gcloud run services list
gcloud compute backend-services update api-backend \
  --global \
  --timeout=600s

Set up the saved search notifications job:

gcloud scheduler jobs create http daily-saved-search-notifications \
  --schedule="0 16 * * *" \
  --uri="https://api.compassmeet.com/internal/send-search-notifications" \
  --http-method=POST \
  --headers="x-api-key=<API_KEY>" \
  --time-zone="UTC" \
  --location=us-west1

You can view the job in the Cloud Scheduler section of the Google Cloud console.

API Deploy CD

gcloud iam service-accounts create ci-deployer \
  --display-name="CI Deployer"
gcloud projects add-iam-policy-binding compass-130ba \
  --member="serviceAccount:ci-deployer@compass-130ba.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"
gcloud projects add-iam-policy-binding compass-130ba \
  --member="serviceAccount:ci-deployer@compass-130ba.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
gcloud projects add-iam-policy-binding compass-130ba \
  --member="serviceAccount:ci-deployer@compass-130ba.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
gcloud projects add-iam-policy-binding compass-130ba \
  --member="serviceAccount:ci-deployer@compass-130ba.iam.gserviceaccount.com" \
  --role="roles/compute.admin"
gcloud iam service-accounts add-iam-policy-binding \
  253367029065-compute@developer.gserviceaccount.com \
  --member="serviceAccount:ci-deployer@compass-130ba.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
gcloud iam service-accounts keys create keyfile.json --iam-account=ci-deployer@compass-130ba.iam.gserviceaccount.com
DNS

After deployment, Terraform assigns a static external IP to the load balancer. You can get it manually:

gcloud compute addresses describe api-lb-ip-2 --global --format="get(address)"

Example output: 34.117.20.215

Since Vercel manages your domain (compassmeet.com):

  1. Log in to Vercel dashboard.
  2. Go to Domains → compassmeet.com → Add Record.
  3. Add an A record for your API subdomain:
Type Name Value TTL
A api 34.123.45.67 600

  • Name is just the subdomain (api), i.e. api.compassmeet.com.
  • Value is the external IP of the LB from step 1.

Verify connectivity

From your local machine:

nslookup api.compassmeet.com
ping -c 3 api.compassmeet.com
curl -I https://api.compassmeet.com
  • nslookup should return the LB IP (34.123.45.67).
  • curl -I should return 200 OK from your service.

If SSL isn't ready (it may take ~15 minutes), check the certificate status:

gcloud compute ssl-certificates describe api-lb-cert-2

Secrets management

Secrets are strings that shouldn't be checked into Git (e.g. API keys, passwords).

Add the secrets for your specific project in Google Cloud Secret Manager so that the virtual machine can access them.

For Compass, the names of the secrets are in secrets.ts.