Publish Typst package and add skill for publishing Typst package.

This commit is contained in:
Sina Atalay
2026-03-20 16:15:43 +03:00
parent 4e07fa2380
commit 67c7ccfc84
11 changed files with 1069 additions and 427 deletions


@@ -0,0 +1,195 @@
---
name: publish-typst-package
description: Create a PR to publish a new version of the rendercv-typst package to the Typst Universe (typst/packages repository). Validates package integrity, forks/clones the repo, copies files, and opens a PR.
disable-model-invocation: true
---
# Publish rendercv-typst to Typst Universe
Create a pull request to `typst/packages` to publish the current version of `rendercv-typst/`.
The clone location for the typst/packages fork is `$HOME/.cache/rendercv/typst-packages`.
## Step 1: Read package metadata
Read `rendercv-typst/typst.toml` to get the version and all metadata fields.
## Step 2: Validate package integrity
Run ALL checks below. Collect ALL failures and report them together. Do NOT proceed to Step 3 if any check fails.
### 2a: Required files
Verify these exist in `rendercv-typst/`:
- `lib.typ`
- `typst.toml`
- `README.md`
- `LICENSE`
- `thumbnail.png`
- `template/main.typ`
### 2b: Manifest completeness
Parse `typst.toml` and verify it has:
- Required: `name`, `version`, `entrypoint`, `authors`, `license`, `description`
- Template section: `[template]` with `path`, `entrypoint`, `thumbnail`
### 2c: Version consistency
Check that the version string in `typst.toml` appears correctly in:
- `README.md` import statements (`@preview/rendercv:X.Y.Z`)
- `template/main.typ` import statement (`@preview/rendercv:X.Y.Z`)
- All example files in `rendercv-typst/examples/*.typ` (if they have import statements)
If ANY file references an old version, stop and report which files need updating.
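The 2c scan boils down to one regex applied to each file's text. A minimal sketch, assuming imports look exactly like `@preview/rendercv:X.Y.Z` (the sample README line is illustrative):

```python
# Sketch of the Step 2c version-consistency scan (hypothetical helper).
import re

IMPORT_RE = re.compile(r"@preview/rendercv:(\d+\.\d+\.\d+)")

def stale_imports(text: str, expected: str) -> list[str]:
    """Return every imported version in `text` that differs from `expected`."""
    return [v for v in IMPORT_RE.findall(text) if v != expected]

readme = 'Import with `#import "@preview/rendercv:0.2.0": *` in your file.'
print(stale_imports(readme, "0.3.0"))  # a 0.2.0 reference would be reported
```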
### 2d: CHANGELOG entry
Read `rendercv-typst/CHANGELOG.md` and verify there is an entry for the version being published.
### 2e: All themes have example files
This is critical. Extract all theme names shown in the README by finding image references that match the pattern `examples/<theme-name>.png` in the image URLs. Then verify that EVERY theme has a corresponding `<theme-name>.typ` file in `rendercv-typst/examples/`.
For example, if the README shows images for classic, engineeringresumes, sb2nov, moderncv, engineeringclassic, and harvard, then ALL of these must exist as `.typ` files in `rendercv-typst/examples/`.
If any example file is missing, STOP and tell the user exactly which files are missing.
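The 2e comparison is a set difference between themes pictured in the README and `.typ` stems on disk. A sketch with an illustrative README snippet (real input would be the README text and the stems of `rendercv-typst/examples/*.typ`):

```python
# Sketch of the Step 2e theme-coverage check (hypothetical helper; the
# pattern mirrors the `examples/<theme-name>.png` image URLs described above).
import re

PNG_RE = re.compile(r"examples/([a-z0-9]+)\.png")

def missing_examples(readme: str, typ_stems: set[str]) -> set[str]:
    """Themes pictured in the README that have no matching .typ example file."""
    return set(PNG_RE.findall(readme)) - typ_stems

readme = "src=.../examples/classic.png ... src=.../examples/harvard.png"
print(missing_examples(readme, {"classic"}))  # harvard would be reported missing
```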
### 2f: No stale or broken links
Check that the `README.md` does not reference nonexistent files within the package (e.g., broken relative links).
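The 2f check can be sketched by collecting non-HTTP markdown link targets and testing them against the set of files shipped in the package (a hypothetical helper; the regex covers markdown-style links only, so raw HTML `src` attributes would need a separate pass):

```python
# Sketch of the Step 2f broken-relative-link check (hypothetical helper).
import re

LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)")

def broken_relative_links(readme: str, package_files: set[str]) -> list[str]:
    """Relative markdown link targets that do not exist in the package."""
    targets = [t for t in LINK_RE.findall(readme)
               if not t.startswith(("http://", "https://", "mailto:"))]
    return [t for t in targets if t.rstrip("/") not in package_files]

readme = "See [lib.typ](lib.typ) and [examples](examples/) and [gone](missing.typ)."
print(broken_relative_links(readme, {"lib.typ", "examples"}))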
### 2g: Import style in template
Verify `template/main.typ` uses the absolute package import (`@preview/rendercv:{version}`) and NOT a relative import like `../lib.typ`. The Typst packages repository requires absolute imports.
## Step 3: Handle previous work
1. Check for existing PRs for rendercv in `typst/packages` (query all states so that stale branches from merged/closed PRs surface too):
```bash
gh pr list --repo typst/packages --author @me --search "rendercv" --state all
```
2. If an existing PR is **open**, ask the user what to do:
- Update the existing PR?
- Close it and create a new one?
- Abort?
3. If the clone directory `$HOME/.cache/rendercv/typst-packages` already exists:
- If there are old branches for previous versions that have been merged/closed, delete them.
- Reset to upstream/main before proceeding.
## Step 4: Set up fork and clone
### If clone does NOT exist:
```bash
mkdir -p "$HOME/.cache/rendercv"
# Fork if not already forked (idempotent)
gh repo fork typst/packages --clone=false
# Clone with sparse checkout (blobless + sparse keeps the large repo small)
gh repo clone "$(gh api user --jq .login)/packages" "$HOME/.cache/rendercv/typst-packages" -- --filter=blob:none --sparse
cd "$HOME/.cache/rendercv/typst-packages"
git sparse-checkout set packages/preview/rendercv
git remote add upstream https://github.com/typst/packages.git 2>/dev/null || true
git fetch upstream main
```
### If clone ALREADY exists:
```bash
cd "$HOME/.cache/rendercv/typst-packages"
git fetch upstream main
git checkout main
git reset --hard upstream/main
```
## Step 5: Create the package version directory
1. Read the version from `typst.toml` (e.g., `0.3.0`).
2. Create a new branch: `git checkout -b rendercv-{version}`
3. Create the target directory: `packages/preview/rendercv/{version}/`
4. Copy files from the rendercv-typst source directory into the target:
**Files to copy:**
- `lib.typ`
- `typst.toml`
- `README.md`
- `LICENSE`
- `thumbnail.png`
- `template/` (entire directory)
- `examples/` (entire directory, but exclude any `.pdf` files)
**Do NOT copy:**
- `CHANGELOG.md`
- `.git/` or `.gitignore`
- Any `.pdf` files
5. Verify no PDF files ended up in the target directory.
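The copy-with-exclusions in Step 5 maps naturally onto `shutil.copytree` with an ignore filter. A self-contained sketch using throwaway temp directories in place of the real source and target paths:

```python
# Sketch of the Step 5 copy: everything except PDFs, CHANGELOG.md, and git
# metadata. Temp dirs stand in for rendercv-typst/ and the target directory.
import shutil
import tempfile
from pathlib import Path

EXCLUDE = shutil.ignore_patterns("*.pdf", "CHANGELOG.md", ".git*")

src = Path(tempfile.mkdtemp())
(src / "lib.typ").write_text("// lib")        # shipped
(src / "notes.pdf").write_text("%PDF")        # excluded: PDF
(src / "CHANGELOG.md").write_text("# log")    # excluded: changelog

dst = Path(tempfile.mkdtemp()) / "0.3.0"      # packages/preview/rendercv/{version}/
shutil.copytree(src, dst, ignore=EXCLUDE)     # ignore filter applies recursively
print(sorted(p.name for p in dst.iterdir()))
```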
## Step 6: Determine previous version
Look at existing directories in `packages/preview/rendercv/` to find the most recent previous version. This is needed for the PR description. If no previous version exists (first submission), note that this is a new package.
## Step 7: Build PR description
Read `rendercv-typst/CHANGELOG.md` and extract the changes for the current version.
**PR title:** `rendercv:{version}`
**PR body for updates:**
```
I am submitting
- [ ] a new package
- [x] an update for a package
Description: {Brief description of the package}. {Summary of what changed in this version}.
### Changes from {previous_version}
{Bullet list of changes extracted from CHANGELOG.md}
```
**PR body for new packages** (if no previous version exists, include the full checklist):
```
I am submitting
- [x] a new package
- [ ] an update for a package
Description: {Description from typst.toml}
I have read and followed the submission guidelines and, in particular, I
- [x] selected a name that isn't the most obvious or canonical name for what the package does
- [x] added a `typst.toml` file with all required keys
- [x] added a `README.md` with documentation for my package
- [x] have chosen a license and added a `LICENSE` file or linked one in my `README.md`
- [x] tested my package locally on my system and it worked
- [x] `exclude`d PDFs or README images, if any, but not the LICENSE
- [x] ensured that my package is licensed such that users can use and distribute the contents of its template directory without restriction, after modifying them through normal use.
```
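Pulling the current version's notes out of `CHANGELOG.md` for the PR body can be sketched as below (assumes `## X.Y.Z - YYYY-MM-DD` headings, as in the package changelog; the sample log is abbreviated):

```python
# Sketch of the Step 7 changelog extraction (hypothetical helper): grab
# everything between this version's heading and the next `## ` heading.
import re

def changelog_section(changelog: str, version: str) -> str:
    pattern = rf"^## {re.escape(version)}\b.*?\n(.*?)(?=^## |\Z)"
    m = re.search(pattern, changelog, re.M | re.S)
    return m.group(1).strip() if m else ""

log = "## 0.3.0 - 2026-03-20\n### Added\n- Harvard example.\n## 0.2.0 - 2026-02-16\n- Older.\n"
print(changelog_section(log, "0.3.0"))
```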
## Step 8: Commit, push, and create PR
```bash
cd "$HOME/.cache/rendercv/typst-packages"
git add packages/preview/rendercv/{version}/
git commit -m "rendercv:{version}"
git push -u origin rendercv-{version}
```
Create the PR:
```bash
gh pr create \
--repo typst/packages \
--base main \
--title "rendercv:{version}" \
--body "..." # Use the body from Step 7
```
## Step 9: Report results
Tell the user:
1. The PR URL (clickable)
2. The clone location (`$HOME/.cache/rendercv/typst-packages`)
3. The branch name (`rendercv-{version}`)
4. Any warnings noticed during validation (even if they didn't block the PR)


@@ -4,14 +4,28 @@ All notable changes to the RenderCV **Typst package** (`@preview/rendercv`) will
For the changelog of the RenderCV CLI and Python package, see [the RenderCV changelog](https://docs.rendercv.com/changelog/).
## 0.2.0 - 2025-02-16
## 0.3.0 - 2026-03-20
### Added
- Four new centered section title styles: `centered_without_line`, `centered_with_partial_line`, `centered_with_centered_partial_line`, and `centered_with_full_line`.
- Harvard theme example (`examples/harvard.typ`).
## 0.2.0 - 2026-02-16
### Added
- RTL (right-to-left) language support via `text-direction` parameter (accepts native Typst `ltr`/`rtl` values). All layout elements (grids, insets, section titles, top note) mirror correctly for RTL languages.
- `title` parameter to customize the PDF document title.
- `entries-degree-width` parameter to control the width of the degree column in education entries.
- Persian RTL example (`examples/rtl.typ`).
### Fixed
- Correct spacing when a headline is present. Previously, `header-space-below-headline` was ignored when a headline existed.
- Empty second line detection in education entries.
- External link icon rendering issues.
## 0.1.0 - 2025-12-05
- Initial release of RenderCV Typst package.


@@ -4,18 +4,18 @@ All six looks below are produced by the same package with different parameter va
<table>
<tr>
<td><img alt="Classic" src="https://raw.githubusercontent.com/rendercv/rendercv/main/docs/assets/images/examples/classic.png" width="350"></td>
<td><img alt="Engineering Resumes" src="https://raw.githubusercontent.com/rendercv/rendercv/main/docs/assets/images/examples/engineeringresumes.png" width="350"></td>
<td><img alt="Sb2nov" src="https://raw.githubusercontent.com/rendercv/rendercv/main/docs/assets/images/examples/sb2nov.png" width="350"></td>
<td><img alt="Example CV using the Classic theme with blue accents and partial section title lines" src="https://raw.githubusercontent.com/rendercv/rendercv/9b7830a0e1b5d731461320c10df0a9c12267e5f0/docs/assets/images/examples/classic.png" width="350"></td>
<td><img alt="Example CV using the Engineering Resumes theme with a minimal single-column layout" src="https://raw.githubusercontent.com/rendercv/rendercv/9b7830a0e1b5d731461320c10df0a9c12267e5f0/docs/assets/images/examples/engineeringresumes.png" width="350"></td>
<td><img alt="Example CV using the Sb2nov theme with full-width section title lines" src="https://raw.githubusercontent.com/rendercv/rendercv/9b7830a0e1b5d731461320c10df0a9c12267e5f0/docs/assets/images/examples/sb2nov.png" width="350"></td>
</tr>
<tr>
<td><img alt="ModernCV" src="https://raw.githubusercontent.com/rendercv/rendercv/main/docs/assets/images/examples/moderncv.png" width="350"></td>
<td><img alt="Engineering Classic" src="https://raw.githubusercontent.com/rendercv/rendercv/main/docs/assets/images/examples/engineeringclassic.png" width="350"></td>
<td><img alt="Harvard" src="https://raw.githubusercontent.com/rendercv/rendercv/main/docs/assets/images/examples/harvard.png" width="350"></td>
<td><img alt="Example CV using the ModernCV theme with a sidebar layout and colored name" src="https://raw.githubusercontent.com/rendercv/rendercv/9b7830a0e1b5d731461320c10df0a9c12267e5f0/docs/assets/images/examples/moderncv.png" width="350"></td>
<td><img alt="Example CV using the Engineering Classic theme with a traditional academic style" src="https://raw.githubusercontent.com/rendercv/rendercv/9b7830a0e1b5d731461320c10df0a9c12267e5f0/docs/assets/images/examples/engineeringclassic.png" width="350"></td>
<td><img alt="Example CV using the Harvard theme with a clean serif font and full-width lines" src="https://raw.githubusercontent.com/rendercv/rendercv/9b7830a0e1b5d731461320c10df0a9c12267e5f0/docs/assets/images/examples/harvard.png" width="350"></td>
</tr>
</table>
See the [examples](https://github.com/rendercv/rendercv-typst/tree/main/examples) directory for the full source of each.
See the [examples](examples/) directory for the full source of each.
## Getting Started
@@ -126,7 +126,7 @@ Everything is customizable through `rendercv.with()`. A few examples:
)
```
For the full list of parameters with defaults, see [`lib.typ`](https://github.com/rendercv/rendercv-typst/blob/main/lib.typ).
For the full list of parameters with defaults, see [`lib.typ`](lib.typ).
## RenderCV


@@ -4,9 +4,11 @@
// Apply the rendercv template with custom configuration
#show: rendercv.with(
name: "John Doe",
title: "John Doe - CV",
footer: context { [#emph[John Doe -- #str(here().page())\/#str(counter(page).final().first())]] },
top-note: [ #emph[Last updated in Dec 2025] ],
top-note: [ #emph[Last updated in Mar 2026] ],
locale-catalog-language: "en",
text-direction: ltr,
page-size: "us-letter",
page-top-margin: 0.7in,
page-bottom-margin: 0.7in,
@@ -67,6 +69,7 @@
entries-space-between-columns: 0.1cm,
entries-allow-page-break: false,
entries-short-second-row: true,
entries-degree-width: 1cm,
entries-summary-space-left: 0cm,
entries-summary-space-above: 0cm,
entries-highlights-bullet: "•" ,
@@ -76,9 +79,9 @@
entries-highlights-space-between-items: 0cm,
entries-highlights-space-between-bullet-and-text: 0.5em,
date: datetime(
year: 2025,
month: 12,
day: 5,
year: 2026,
month: 3,
day: 20,
),
)
@@ -98,26 +101,30 @@
RenderCV reads a CV written in a YAML file, and generates a PDF with professional typography.
See the #link("https://docs.rendercv.com")[documentation] for more details.
Each section title is arbitrary.
You can choose any of the 9 entry types for each section.
Markdown syntax is supported everywhere. This is #strong[bold], #emph[italic], and #link("https://example.com")[link].
== Education
#education-entry(
[
#strong[Princeton University], Computer Science
- Thesis: Efficient Neural Architecture Search for Resource-Constrained Deployment
- Advisor: Prof. Sanjeev Arora
- NSF Graduate Research Fellowship, Siebel Scholar (Class of 2022)
],
[
Princeton, NJ
Sept 2018 -- May 2023
],
degree-column: [
#strong[PhD]
@@ -127,17 +134,17 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#education-entry(
[
#strong[Boğaziçi University], Computer Engineering
- GPA: 3.97\/4.00, Valedictorian
- Fulbright Scholarship recipient for graduate studies
- Fulbright Scholarship recipient for Graduate Studies
],
[
Istanbul, Türkiye
Sept 2014 -- June 2018
],
degree-column: [
#strong[BS]
@@ -149,105 +156,105 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Nexus AI], Co-Founder & CTO
- Built foundation model infrastructure serving 2M+ monthly API requests with 99.97\% uptime
- Raised \$18M Series A led by Sequoia Capital, with participation from a16z and Founders Fund
- Scaled engineering team from 3 to 28 across ML research, platform, and applied AI divisions
- Developed proprietary inference optimization reducing latency by 73\% compared to baseline
],
[
San Francisco, CA
June 2023 -- present
2 years 7 months
2 years 10 months
],
)
#regular-entry(
[
#strong[NVIDIA Research], Research Intern
- Designed sparse attention mechanism reducing transformer memory footprint by 4.2x
- Co-authored paper accepted at NeurIPS 2022 (spotlight presentation, top 5\% of submissions)
],
[
Santa Clara, CA
May 2022 -- Aug 2022
4 months
],
)
#regular-entry(
[
#strong[Google DeepMind], Research Intern
- Developed reinforcement learning algorithms for multi-agent coordination
- Published research at top-tier venues with significant academic impact
- ICML 2022 main conference paper, cited 340+ times within two years
- NeurIPS 2022 workshop paper on emergent communication protocols
- Invited journal extension in JMLR (2023)
],
[
London, UK
May 2021 -- Aug 2021
4 months
],
)
#regular-entry(
[
#strong[Apple ML Research], Research Intern
- Created on-device neural network compression pipeline deployed across 50M+ devices
- Filed 2 patents on efficient model quantization techniques for edge inference
],
[
Cupertino, CA
May 2020 -- Aug 2020
4 months
],
)
#regular-entry(
[
#strong[Microsoft Research], Research Intern
- Implemented novel self-supervised learning framework for low-resource language modeling
- Research integrated into Azure Cognitive Services, reducing training data requirements by 60\%
],
[
Redmond, WA
May 2019 -- Aug 2019
4 months
],
)
@@ -256,34 +263,34 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[#link("https://github.com/")[FlashInfer]]
#summary[Open-source library for high-performance LLM inference kernels]
- Achieved 2.8x speedup over baseline attention implementations on A100 GPUs
- Adopted by 3 major AI labs, 8,500+ GitHub stars, 200+ contributors
],
[
Jan 2023 -- present
],
)
#regular-entry(
[
#strong[#link("https://github.com/")[NeuralPrune]]
#summary[Automated neural network pruning toolkit with differentiable masks]
- Reduced model size by 90\% with less than 1\% accuracy degradation on ImageNet
- Featured in PyTorch ecosystem tools, 4,200+ GitHub stars
],
[
Jan 2021
],
)
@@ -292,60 +299,60 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Sparse Mixture-of-Experts at Scale: Efficient Routing for Trillion-Parameter Models]
#emph[John Doe], Sarah Williams, David Park
#link("https://doi.org/10.1234/neurips.2023.1234")[10.1234\/neurips.2023.1234] (NeurIPS 2023)
],
[
July 2023
],
)
#regular-entry(
[
#strong[Neural Architecture Search via Differentiable Pruning]
James Liu, #emph[John Doe]
#link("https://doi.org/10.1234/neurips.2022.5678")[10.1234\/neurips.2022.5678] (NeurIPS 2022, Spotlight)
],
[
Dec 2022
],
)
#regular-entry(
[
#strong[Multi-Agent Reinforcement Learning with Emergent Communication]
Maria Garcia, #emph[John Doe], Tom Anderson
#link("https://doi.org/10.1234/icml.2022.9012")[10.1234\/icml.2022.9012] (ICML 2022)
],
[
July 2022
],
)
#regular-entry(
[
#strong[On-Device Model Compression via Learned Quantization]
#emph[John Doe], Kevin Wu
#link("https://doi.org/10.1234/iclr.2021.3456")[10.1234\/iclr.2021.3456] (ICLR 2021, Best Paper Award)
],
[
May 2021
],
)
@@ -393,15 +400,3 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
+ Efficient Deep Learning: A Practitioner's Perspective -- Google Tech Talk (2022)
],
)
== Any Section Title
You can use any section title you want.
You can choose any entry type for the section: `TextEntry`, `ExperienceEntry`, `EducationEntry`, `PublicationEntry`, `BulletEntry`, `NumberedEntry`, or `ReversedNumberedEntry`.
Markdown syntax is supported everywhere.
The `design` field in YAML gives you control over almost any aspect of your CV design.
See the #link("https://docs.rendercv.com")[documentation] for more details.


@@ -4,9 +4,11 @@
// Apply the rendercv template with custom configuration
#show: rendercv.with(
name: "John Doe",
title: "John Doe - CV",
footer: context { [#emph[John Doe -- #str(here().page())\/#str(counter(page).final().first())]] },
top-note: [ #emph[Last updated in Dec 2025] ],
top-note: [ #emph[Last updated in Mar 2026] ],
locale-catalog-language: "en",
text-direction: ltr,
page-size: "us-letter",
page-top-margin: 0.7in,
page-bottom-margin: 0.7in,
@@ -67,6 +69,7 @@
entries-space-between-columns: 0.1cm,
entries-allow-page-break: false,
entries-short-second-row: false,
entries-degree-width: 1cm,
entries-summary-space-left: 0cm,
entries-summary-space-above: 0.12cm,
entries-highlights-bullet: "•" ,
@@ -76,9 +79,9 @@
entries-highlights-space-between-items: 0.12cm,
entries-highlights-space-between-bullet-and-text: 0.5em,
date: datetime(
year: 2025,
month: 12,
day: 5,
year: 2026,
month: 3,
day: 20,
),
)
@@ -98,43 +101,47 @@
RenderCV reads a CV written in a YAML file, and generates a PDF with professional typography.
See the #link("https://docs.rendercv.com")[documentation] for more details.
Each section title is arbitrary.
You can choose any of the 9 entry types for each section.
Markdown syntax is supported everywhere. This is #strong[bold], #emph[italic], and #link("https://example.com")[link].
== Education
#education-entry(
[
#strong[Princeton University], PhD in Computer Science -- Princeton, NJ
],
[
Sept 2018 -- May 2023
],
main-column-second-row: [
- Thesis: Efficient Neural Architecture Search for Resource-Constrained Deployment
- Advisor: Prof. Sanjeev Arora
- NSF Graduate Research Fellowship, Siebel Scholar (Class of 2022)
],
)
#education-entry(
[
#strong[Boğaziçi University], BS in Computer Engineering -- Istanbul, Türkiye
],
[
Sept 2014 -- June 2018
],
main-column-second-row: [
- GPA: 3.97\/4.00, Valedictorian
- Fulbright Scholarship recipient for graduate studies
- Fulbright Scholarship recipient for Graduate Studies
],
)
@@ -143,95 +150,95 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Co-Founder & CTO], Nexus AI -- San Francisco, CA
],
[
June 2023 -- present
],
main-column-second-row: [
- Built foundation model infrastructure serving 2M+ monthly API requests with 99.97\% uptime
- Raised \$18M Series A led by Sequoia Capital, with participation from a16z and Founders Fund
- Scaled engineering team from 3 to 28 across ML research, platform, and applied AI divisions
- Developed proprietary inference optimization reducing latency by 73\% compared to baseline
],
)
#regular-entry(
[
#strong[Research Intern], NVIDIA Research -- Santa Clara, CA
],
[
May 2022 -- Aug 2022
],
main-column-second-row: [
- Designed sparse attention mechanism reducing transformer memory footprint by 4.2x
- Co-authored paper accepted at NeurIPS 2022 (spotlight presentation, top 5\% of submissions)
],
)
#regular-entry(
[
#strong[Research Intern], Google DeepMind -- London, UK
],
[
May 2021 -- Aug 2021
],
main-column-second-row: [
- Developed reinforcement learning algorithms for multi-agent coordination
- Published research at top-tier venues with significant academic impact
- ICML 2022 main conference paper, cited 340+ times within two years
- NeurIPS 2022 workshop paper on emergent communication protocols
- Invited journal extension in JMLR (2023)
],
)
#regular-entry(
[
#strong[Research Intern], Apple ML Research -- Cupertino, CA
],
[
May 2020 -- Aug 2020
],
main-column-second-row: [
- Created on-device neural network compression pipeline deployed across 50M+ devices
- Filed 2 patents on efficient model quantization techniques for edge inference
],
)
#regular-entry(
[
#strong[Research Intern], Microsoft Research -- Redmond, WA
],
[
May 2019 -- Aug 2019
],
main-column-second-row: [
- Implemented novel self-supervised learning framework for low-resource language modeling
- Research integrated into Azure Cognitive Services, reducing training data requirements by 60\%
],
)
@@ -240,38 +247,38 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[#link("https://github.com/")[FlashInfer]]
],
[
Jan 2023 -- present
],
main-column-second-row: [
#summary[Open-source library for high-performance LLM inference kernels]
- Achieved 2.8x speedup over baseline attention implementations on A100 GPUs
- Adopted by 3 major AI labs, 8,500+ GitHub stars, 200+ contributors
],
)
#regular-entry(
[
#strong[#link("https://github.com/")[NeuralPrune]]
],
[
Jan 2021
],
main-column-second-row: [
#summary[Automated neural network pruning toolkit with differentiable masks]
- Reduced model size by 90\% with less than 1\% accuracy degradation on ImageNet
- Featured in PyTorch ecosystem tools, 4,200+ GitHub stars
],
)
@@ -280,68 +287,68 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Sparse Mixture-of-Experts at Scale: Efficient Routing for Trillion-Parameter Models]
],
[
July 2023
],
main-column-second-row: [
#emph[John Doe], Sarah Williams, David Park
#link("https://doi.org/10.1234/neurips.2023.1234")[10.1234\/neurips.2023.1234] (NeurIPS 2023)
],
)
#regular-entry(
[
#strong[Neural Architecture Search via Differentiable Pruning]
],
[
Dec 2022
],
main-column-second-row: [
James Liu, #emph[John Doe]
#link("https://doi.org/10.1234/neurips.2022.5678")[10.1234\/neurips.2022.5678] (NeurIPS 2022, Spotlight)
],
)
#regular-entry(
[
#strong[Multi-Agent Reinforcement Learning with Emergent Communication]
],
[
July 2022
],
main-column-second-row: [
Maria Garcia, #emph[John Doe], Tom Anderson
#link("https://doi.org/10.1234/icml.2022.9012")[10.1234\/icml.2022.9012] (ICML 2022)
],
)
#regular-entry(
[
#strong[On-Device Model Compression via Learned Quantization]
],
[
May 2021
],
main-column-second-row: [
#emph[John Doe], Kevin Wu
#link("https://doi.org/10.1234/iclr.2021.3456")[10.1234\/iclr.2021.3456] (ICLR 2021, Best Paper Award)
],
)
@@ -389,15 +396,3 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
+ Efficient Deep Learning: A Practitioner's Perspective -- Google Tech Talk (2022)
],
)
== Any Section Title
You can use any section title you want.
You can choose any entry type for the section: `TextEntry`, `ExperienceEntry`, `EducationEntry`, `PublicationEntry`, `BulletEntry`, `NumberedEntry`, or `ReversedNumberedEntry`.
Markdown syntax is supported everywhere.
The `design` field in YAML gives you control over almost any aspect of your CV design.
See the #link("https://docs.rendercv.com")[documentation] for more details.


@@ -4,9 +4,11 @@
// Apply the rendercv template with custom configuration
#show: rendercv.with(
name: "John Doe",
title: "John Doe - CV",
footer: context { [#emph[John Doe -- #str(here().page())\/#str(counter(page).final().first())]] },
top-note: [ #emph[Last updated in Dec 2025] ],
top-note: [ #emph[Last updated in Mar 2026] ],
locale-catalog-language: "en",
text-direction: ltr,
page-size: "us-letter",
page-top-margin: 0.7in,
page-bottom-margin: 0.7in,
@@ -67,6 +69,7 @@
entries-space-between-columns: 0.1cm,
entries-allow-page-break: false,
entries-short-second-row: false,
entries-degree-width: 1cm,
entries-summary-space-left: 0cm,
entries-summary-space-above: 0.08cm,
entries-highlights-bullet: text(13pt, [], baseline: -0.6pt) ,
@@ -76,9 +79,9 @@
entries-highlights-space-between-items: 0.08cm,
entries-highlights-space-between-bullet-and-text: 0.3em,
date: datetime(
year: 2025,
month: 12,
day: 5,
year: 2026,
month: 3,
day: 20,
),
)
@@ -98,43 +101,47 @@
RenderCV reads a CV written in a YAML file, and generates a PDF with professional typography.
See the #link("https://docs.rendercv.com")[documentation] for more details.
Each section title is arbitrary.
You can choose any of the 9 entry types for each section.
Markdown syntax is supported everywhere. This is #strong[bold], #emph[italic], and #link("https://example.com")[link].
== Education
#education-entry(
[
#strong[Princeton University], PhD in Computer Science -- Princeton, NJ
],
[
Sept 2018 -- May 2023
],
main-column-second-row: [
- Thesis: Efficient Neural Architecture Search for Resource-Constrained Deployment
- Advisor: Prof. Sanjeev Arora
- NSF Graduate Research Fellowship, Siebel Scholar (Class of 2022)
],
)
#education-entry(
[
#strong[Boğaziçi University], BS in Computer Engineering -- Istanbul, Türkiye
],
[
Sept 2014 -- June 2018
],
main-column-second-row: [
- GPA: 3.97\/4.00, Valedictorian
- Fulbright Scholarship recipient for graduate studies
- Fulbright Scholarship recipient for Graduate Studies
],
)
@@ -143,95 +150,95 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Co-Founder & CTO], Nexus AI -- San Francisco, CA
],
[
June 2023 -- present
],
main-column-second-row: [
- Built foundation model infrastructure serving 2M+ monthly API requests with 99.97\% uptime
- Raised \$18M Series A led by Sequoia Capital, with participation from a16z and Founders Fund
- Scaled engineering team from 3 to 28 across ML research, platform, and applied AI divisions
- Developed proprietary inference optimization reducing latency by 73\% compared to baseline
],
)
#regular-entry(
[
#strong[Research Intern], NVIDIA Research -- Santa Clara, CA
],
[
May 2022 -- Aug 2022
],
main-column-second-row: [
- Designed sparse attention mechanism reducing transformer memory footprint by 4.2x
- Co-authored paper accepted at NeurIPS 2022 (spotlight presentation, top 5\% of submissions)
],
)
#regular-entry(
[
#strong[Research Intern], Google DeepMind -- London, UK
],
[
May 2021 -- Aug 2021
],
main-column-second-row: [
- Developed reinforcement learning algorithms for multi-agent coordination
- Published research at top-tier venues with significant academic impact
- ICML 2022 main conference paper, cited 340+ times within two years
- NeurIPS 2022 workshop paper on emergent communication protocols
- Invited journal extension in JMLR (2023)
],
)
#regular-entry(
[
#strong[Research Intern], Apple ML Research -- Cupertino, CA
],
[
May 2020 -- Aug 2020
],
main-column-second-row: [
- Created on-device neural network compression pipeline deployed across 50M+ devices
- Filed 2 patents on efficient model quantization techniques for edge inference
],
)
#regular-entry(
[
#strong[Research Intern], Microsoft Research -- Redmond, WA
],
[
May 2019 -- Aug 2019
],
main-column-second-row: [
- Implemented novel self-supervised learning framework for low-resource language modeling
- Research integrated into Azure Cognitive Services, reducing training data requirements by 60\%
],
)
@@ -240,38 +247,38 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[#link("https://github.com/")[FlashInfer]]
],
[
Jan 2023 -- present
],
main-column-second-row: [
#summary[Open-source library for high-performance LLM inference kernels]
- Achieved 2.8x speedup over baseline attention implementations on A100 GPUs
- Adopted by 3 major AI labs, 8,500+ GitHub stars, 200+ contributors
],
)
#regular-entry(
[
#strong[#link("https://github.com/")[NeuralPrune]]
],
[
Jan 2021
],
main-column-second-row: [
#summary[Automated neural network pruning toolkit with differentiable masks]
- Reduced model size by 90\% with less than 1\% accuracy degradation on ImageNet
- Featured in PyTorch ecosystem tools, 4,200+ GitHub stars
],
)
@@ -280,68 +287,68 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Sparse Mixture-of-Experts at Scale: Efficient Routing for Trillion-Parameter Models]
],
[
July 2023
],
main-column-second-row: [
#emph[John Doe], Sarah Williams, David Park
#link("https://doi.org/10.1234/neurips.2023.1234")[10.1234\/neurips.2023.1234] (NeurIPS 2023)
],
)
#regular-entry(
[
#strong[Neural Architecture Search via Differentiable Pruning]
],
[
Dec 2022
],
main-column-second-row: [
James Liu, #emph[John Doe]
#link("https://doi.org/10.1234/neurips.2022.5678")[10.1234\/neurips.2022.5678] (NeurIPS 2022, Spotlight)
],
)
#regular-entry(
[
#strong[Multi-Agent Reinforcement Learning with Emergent Communication]
],
[
July 2022
],
main-column-second-row: [
Maria Garcia, #emph[John Doe], Tom Anderson
#link("https://doi.org/10.1234/icml.2022.9012")[10.1234\/icml.2022.9012] (ICML 2022)
],
)
#regular-entry(
[
#strong[On-Device Model Compression via Learned Quantization]
],
[
May 2021
],
main-column-second-row: [
#emph[John Doe], Kevin Wu
#link("https://doi.org/10.1234/iclr.2021.3456")[10.1234\/iclr.2021.3456] (ICLR 2021, Best Paper Award)
],
)
@@ -389,15 +396,3 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
+ Efficient Deep Learning: A Practitioner's Perspective -- Google Tech Talk (2022)
],
)
== Any Section Title
You can use any section title you want.
You can choose any entry type for the section: `TextEntry`, `ExperienceEntry`, `EducationEntry`, `PublicationEntry`, `BulletEntry`, `NumberedEntry`, or `ReversedNumberedEntry`.
Markdown syntax is supported everywhere.
The `design` field in YAML gives you control over almost any aspect of your CV design.
See the #link("https://docs.rendercv.com")[documentation] for more details.


@@ -0,0 +1,404 @@
// Import the rendercv function and all the refactored components
#import "@preview/rendercv:0.3.0": *
// Apply the rendercv template with custom configuration
#show: rendercv.with(
name: "John Doe",
title: "John Doe - CV",
footer: context { [#emph[John Doe -- #str(here().page())\/#str(counter(page).final().first())]] },
top-note: [ #emph[Last updated in Mar 2026] ],
locale-catalog-language: "en",
text-direction: ltr,
page-size: "us-letter",
page-top-margin: 0.5in,
page-bottom-margin: 0.5in,
page-left-margin: 0.5in,
page-right-margin: 0.5in,
page-show-footer: true,
page-show-top-note: false,
colors-body: rgb(0, 0, 0),
colors-name: rgb(0, 0, 0),
colors-headline: rgb(0, 0, 0),
colors-connections: rgb(0, 0, 0),
colors-section-titles: rgb(0, 0, 0),
colors-links: rgb(0, 0, 0),
colors-footer: rgb(128, 128, 128),
colors-top-note: rgb(128, 128, 128),
typography-line-spacing: 0.6em,
typography-alignment: "justified",
typography-date-and-location-column-alignment: right,
typography-font-family-body: "XCharter",
typography-font-family-name: "XCharter",
typography-font-family-headline: "XCharter",
typography-font-family-connections: "XCharter",
typography-font-family-section-titles: "XCharter",
typography-font-size-body: 10pt,
typography-font-size-name: 25pt,
typography-font-size-headline: 10pt,
typography-font-size-connections: 9pt,
typography-font-size-section-titles: 1.3em,
typography-small-caps-name: false,
typography-small-caps-headline: false,
typography-small-caps-connections: false,
typography-small-caps-section-titles: false,
typography-bold-name: true,
typography-bold-headline: false,
typography-bold-connections: false,
typography-bold-section-titles: true,
links-underline: false,
links-show-external-link-icon: false,
header-alignment: center,
header-photo-width: 3.5cm,
header-space-below-name: 0.5cm,
header-space-below-headline: 0.5cm,
header-space-below-connections: 0.5cm,
header-connections-hyperlink: true,
header-connections-show-icons: false,
header-connections-display-urls-instead-of-usernames: false,
header-connections-separator: "•",
header-connections-space-between-connections: 0.4cm,
section-titles-type: "with_full_line",
section-titles-line-thickness: 0.5pt,
section-titles-space-above: 0.5cm,
section-titles-space-below: 0.2cm,
sections-allow-page-break: true,
sections-space-between-text-based-entries: 0.3em,
sections-space-between-regular-entries: 1em,
entries-date-and-location-width: 4.15cm,
entries-side-space: 0.2cm,
entries-space-between-columns: 0.1cm,
entries-allow-page-break: false,
entries-short-second-row: false,
entries-degree-width: 1cm,
entries-summary-space-left: 0cm,
entries-summary-space-above: 0cm,
entries-highlights-bullet: "•" ,
entries-highlights-nested-bullet: "•" ,
entries-highlights-space-left: 0.15cm,
entries-highlights-space-above: 0cm,
entries-highlights-space-between-items: 0cm,
entries-highlights-space-between-bullet-and-text: 0.5em,
date: datetime(
year: 2026,
month: 3,
day: 20,
),
)
= John Doe
#connections(
[San Francisco, CA],
[#link("mailto:john.doe@email.com", icon: false, if-underline: false, if-color: false)[john.doe\@email.com]],
[#link("https://rendercv.com/", icon: false, if-underline: false, if-color: false)[rendercv.com]],
[#link("https://linkedin.com/in/rendercv", icon: false, if-underline: false, if-color: false)[rendercv]],
[#link("https://github.com/rendercv", icon: false, if-underline: false, if-color: false)[rendercv]],
)
== Welcome to RenderCV
RenderCV reads a CV written in a YAML file, and generates a PDF with professional typography.
Each section title is arbitrary.
You can choose any of the 9 entry types for each section.
Markdown syntax is supported everywhere. This is #strong[bold], #emph[italic], and #link("https://example.com")[link].
== Education
#education-entry(
[
#strong[Princeton University], PhD in Computer Science -- Princeton, NJ
],
[
Sept 2018 – May 2023
],
degree-column: [
#strong[PhD]
],
main-column-second-row: [
- Thesis: Efficient Neural Architecture Search for Resource-Constrained Deployment
- Advisor: Prof. Sanjeev Arora
- NSF Graduate Research Fellowship, Siebel Scholar (Class of 2022)
],
)
#education-entry(
[
#strong[Boğaziçi University], BS in Computer Engineering -- Istanbul, Türkiye
],
[
Sept 2014 – June 2018
],
degree-column: [
#strong[BS]
],
main-column-second-row: [
- GPA: 3.97\/4.00, Valedictorian
- Fulbright Scholarship recipient for Graduate Studies
],
)
== Experience
#regular-entry(
[
#strong[Nexus AI], Co-Founder & CTO -- San Francisco, CA
],
[
June 2023 – present
],
main-column-second-row: [
- Built foundation model infrastructure serving 2M+ monthly API requests with 99.97\% uptime
- Raised \$18M Series A led by Sequoia Capital, with participation from a16z and Founders Fund
- Scaled engineering team from 3 to 28 across ML research, platform, and applied AI divisions
- Developed proprietary inference optimization reducing latency by 73\% compared to baseline
],
)
#regular-entry(
[
#strong[NVIDIA Research], Research Intern -- Santa Clara, CA
],
[
May 2022 – Aug 2022
],
main-column-second-row: [
- Designed sparse attention mechanism reducing transformer memory footprint by 4.2x
- Co-authored paper accepted at NeurIPS 2022 (spotlight presentation, top 5\% of submissions)
],
)
#regular-entry(
[
#strong[Google DeepMind], Research Intern -- London, UK
],
[
May 2021 – Aug 2021
],
main-column-second-row: [
- Developed reinforcement learning algorithms for multi-agent coordination
- Published research at top-tier venues with significant academic impact
- ICML 2022 main conference paper, cited 340+ times within two years
- NeurIPS 2022 workshop paper on emergent communication protocols
- Invited journal extension in JMLR (2023)
],
)
#regular-entry(
[
#strong[Apple ML Research], Research Intern -- Cupertino, CA
],
[
May 2020 – Aug 2020
],
main-column-second-row: [
- Created on-device neural network compression pipeline deployed across 50M+ devices
- Filed 2 patents on efficient model quantization techniques for edge inference
],
)
#regular-entry(
[
#strong[Microsoft Research], Research Intern -- Redmond, WA
],
[
May 2019 – Aug 2019
],
main-column-second-row: [
- Implemented novel self-supervised learning framework for low-resource language modeling
- Research integrated into Azure Cognitive Services, reducing training data requirements by 60\%
],
)
== Projects
#regular-entry(
[
#strong[#link("https://github.com/")[FlashInfer]]
],
[
Jan 2023 – present
],
main-column-second-row: [
#summary[Open-source library for high-performance LLM inference kernels]
- Achieved 2.8x speedup over baseline attention implementations on A100 GPUs
- Adopted by 3 major AI labs, 8,500+ GitHub stars, 200+ contributors
],
)
#regular-entry(
[
#strong[#link("https://github.com/")[NeuralPrune]]
],
[
Jan 2021
],
main-column-second-row: [
#summary[Automated neural network pruning toolkit with differentiable masks]
- Reduced model size by 90\% with less than 1\% accuracy degradation on ImageNet
- Featured in PyTorch ecosystem tools, 4,200+ GitHub stars
],
)
== Publications
#regular-entry(
[
#strong[Sparse Mixture-of-Experts at Scale: Efficient Routing for Trillion-Parameter Models]
],
[
July 2023
],
main-column-second-row: [
#emph[John Doe], Sarah Williams, David Park
#link("https://doi.org/10.1234/neurips.2023.1234")[10.1234\/neurips.2023.1234] (NeurIPS 2023)
],
)
#regular-entry(
[
#strong[Neural Architecture Search via Differentiable Pruning]
],
[
Dec 2022
],
main-column-second-row: [
James Liu, #emph[John Doe]
#link("https://doi.org/10.1234/neurips.2022.5678")[10.1234\/neurips.2022.5678] (NeurIPS 2022, Spotlight)
],
)
#regular-entry(
[
#strong[Multi-Agent Reinforcement Learning with Emergent Communication]
],
[
July 2022
],
main-column-second-row: [
Maria Garcia, #emph[John Doe], Tom Anderson
#link("https://doi.org/10.1234/icml.2022.9012")[10.1234\/icml.2022.9012] (ICML 2022)
],
)
#regular-entry(
[
#strong[On-Device Model Compression via Learned Quantization]
],
[
May 2021
],
main-column-second-row: [
#emph[John Doe], Kevin Wu
#link("https://doi.org/10.1234/iclr.2021.3456")[10.1234\/iclr.2021.3456] (ICLR 2021, Best Paper Award)
],
)
== Selected Honors
- MIT Technology Review 35 Under 35 Innovators (2024)
- Forbes 30 Under 30 in Enterprise Technology (2024)
- ACM Doctoral Dissertation Award Honorable Mention (2023)
- Google PhD Fellowship in Machine Learning (2020 – 2023)
- Fulbright Scholarship for Graduate Studies (2018)
== Skills
#strong[Languages:] Python, C++, CUDA, Rust, Julia
#strong[ML Frameworks:] PyTorch, JAX, TensorFlow, Triton, ONNX
#strong[Infrastructure:] Kubernetes, Ray, distributed training, AWS, GCP
#strong[Research Areas:] Neural architecture search, model compression, efficient inference, multi-agent RL
== Patents
+ Adaptive Quantization for Neural Network Inference on Edge Devices (US Patent 11,234,567)
+ Dynamic Sparsity Patterns for Efficient Transformer Attention (US Patent 11,345,678)
+ Hardware-Aware Neural Architecture Search Method (US Patent 11,456,789)
== Invited Talks
#reversed-numbered-entries(
[
+ Scaling Laws for Efficient Inference – Stanford HAI Symposium (2024)
+ Building AI Infrastructure for the Next Decade – TechCrunch Disrupt (2024)
+ From Research to Production: Lessons in ML Systems – NeurIPS Workshop (2023)
+ Efficient Deep Learning: A Practitioner's Perspective – Google Tech Talk (2022)
],
)

View File

@@ -4,9 +4,11 @@
// Apply the rendercv template with custom configuration
#show: rendercv.with(
name: "John Doe",
title: "John Doe - CV",
footer: context { [#emph[John Doe -- #str(here().page())\/#str(counter(page).final().first())]] },
top-note: [ #emph[Last updated in Dec 2025] ],
top-note: [ #emph[Last updated in Mar 2026] ],
locale-catalog-language: "en",
text-direction: ltr,
page-size: "us-letter",
page-top-margin: 0.7in,
page-bottom-margin: 0.7in,
@@ -67,6 +69,7 @@
entries-space-between-columns: 0.3cm,
entries-allow-page-break: false,
entries-short-second-row: false,
entries-degree-width: 1cm,
entries-summary-space-left: 0cm,
entries-summary-space-above: 0.1cm,
entries-highlights-bullet: "•" ,
@@ -76,9 +79,9 @@
entries-highlights-space-between-items: 0.1cm,
entries-highlights-space-between-bullet-and-text: 0.3em,
date: datetime(
year: 2025,
month: 12,
day: 5,
year: 2026,
month: 3,
day: 20,
),
)
@@ -98,43 +101,47 @@
RenderCV reads a CV written in a YAML file, and generates a PDF with professional typography.
See the #link("https://docs.rendercv.com")[documentation] for more details.
Each section title is arbitrary.
You can choose any of the 9 entry types for each section.
Markdown syntax is supported everywhere. This is #strong[bold], #emph[italic], and #link("https://example.com")[link].
== Education
#education-entry(
[
#strong[Princeton University], PhD in Computer Science -- Princeton, NJ
],
[
Sept 2018 – May 2023
],
main-column-second-row: [
- Thesis: Efficient Neural Architecture Search for Resource-Constrained Deployment
- Advisor: Prof. Sanjeev Arora
- NSF Graduate Research Fellowship, Siebel Scholar (Class of 2022)
],
)
#education-entry(
[
#strong[Boğaziçi University], BS in Computer Engineering -- Istanbul, Türkiye
],
[
Sept 2014 – June 2018
],
main-column-second-row: [
- GPA: 3.97\/4.00, Valedictorian
- Fulbright Scholarship recipient for graduate studies
- Fulbright Scholarship recipient for Graduate Studies
],
)
@@ -143,95 +150,95 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Co-Founder & CTO], Nexus AI -- San Francisco, CA
],
[
June 2023 – present
],
main-column-second-row: [
- Built foundation model infrastructure serving 2M+ monthly API requests with 99.97\% uptime
- Raised \$18M Series A led by Sequoia Capital, with participation from a16z and Founders Fund
- Scaled engineering team from 3 to 28 across ML research, platform, and applied AI divisions
- Developed proprietary inference optimization reducing latency by 73\% compared to baseline
],
)
#regular-entry(
[
#strong[Research Intern], NVIDIA Research -- Santa Clara, CA
],
[
May 2022 – Aug 2022
],
main-column-second-row: [
- Designed sparse attention mechanism reducing transformer memory footprint by 4.2x
- Co-authored paper accepted at NeurIPS 2022 (spotlight presentation, top 5\% of submissions)
],
)
#regular-entry(
[
#strong[Research Intern], Google DeepMind -- London, UK
],
[
May 2021 – Aug 2021
],
main-column-second-row: [
- Developed reinforcement learning algorithms for multi-agent coordination
- Published research at top-tier venues with significant academic impact
- ICML 2022 main conference paper, cited 340+ times within two years
- NeurIPS 2022 workshop paper on emergent communication protocols
- Invited journal extension in JMLR (2023)
],
)
#regular-entry(
[
#strong[Research Intern], Apple ML Research -- Cupertino, CA
],
[
May 2020 – Aug 2020
],
main-column-second-row: [
- Created on-device neural network compression pipeline deployed across 50M+ devices
- Filed 2 patents on efficient model quantization techniques for edge inference
],
)
#regular-entry(
[
#strong[Research Intern], Microsoft Research -- Redmond, WA
],
[
May 2019 – Aug 2019
],
main-column-second-row: [
- Implemented novel self-supervised learning framework for low-resource language modeling
- Research integrated into Azure Cognitive Services, reducing training data requirements by 60\%
],
)
@@ -240,38 +247,38 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[#link("https://github.com/")[FlashInfer]]
],
[
Jan 2023 – present
],
main-column-second-row: [
#summary[Open-source library for high-performance LLM inference kernels]
- Achieved 2.8x speedup over baseline attention implementations on A100 GPUs
- Adopted by 3 major AI labs, 8,500+ GitHub stars, 200+ contributors
],
)
#regular-entry(
[
#strong[#link("https://github.com/")[NeuralPrune]]
],
[
Jan 2021
],
main-column-second-row: [
#summary[Automated neural network pruning toolkit with differentiable masks]
- Reduced model size by 90\% with less than 1\% accuracy degradation on ImageNet
- Featured in PyTorch ecosystem tools, 4,200+ GitHub stars
],
)
@@ -280,68 +287,68 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Sparse Mixture-of-Experts at Scale: Efficient Routing for Trillion-Parameter Models]
],
[
July 2023
],
main-column-second-row: [
#emph[John Doe], Sarah Williams, David Park
#link("https://doi.org/10.1234/neurips.2023.1234")[10.1234\/neurips.2023.1234] (NeurIPS 2023)
],
)
#regular-entry(
[
#strong[Neural Architecture Search via Differentiable Pruning]
],
[
Dec 2022
],
main-column-second-row: [
James Liu, #emph[John Doe]
#link("https://doi.org/10.1234/neurips.2022.5678")[10.1234\/neurips.2022.5678] (NeurIPS 2022, Spotlight)
],
)
#regular-entry(
[
#strong[Multi-Agent Reinforcement Learning with Emergent Communication]
],
[
July 2022
],
main-column-second-row: [
Maria Garcia, #emph[John Doe], Tom Anderson
#link("https://doi.org/10.1234/icml.2022.9012")[10.1234\/icml.2022.9012] (ICML 2022)
],
)
#regular-entry(
[
#strong[On-Device Model Compression via Learned Quantization]
],
[
May 2021
],
main-column-second-row: [
#emph[John Doe], Kevin Wu
#link("https://doi.org/10.1234/iclr.2021.3456")[10.1234\/iclr.2021.3456] (ICLR 2021, Best Paper Award)
],
)
@@ -389,15 +396,3 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
+ Efficient Deep Learning: A Practitioner's Perspective – Google Tech Talk (2022)
],
)
== Any Section Title
You can use any section title you want.
You can choose any entry type for the section: `TextEntry`, `ExperienceEntry`, `EducationEntry`, `PublicationEntry`, `BulletEntry`, `NumberedEntry`, or `ReversedNumberedEntry`.
Markdown syntax is supported everywhere.
The `design` field in YAML gives you control over almost any aspect of your CV design.
See the #link("https://docs.rendercv.com")[documentation] for more details.

View File

@@ -4,9 +4,11 @@
// Apply the rendercv template with custom configuration
#show: rendercv.with(
name: "John Doe",
title: "John Doe - CV",
footer: context { [#emph[John Doe -- #str(here().page())\/#str(counter(page).final().first())]] },
top-note: [ #emph[Last updated in Dec 2025] ],
top-note: [ #emph[Last updated in Mar 2026] ],
locale-catalog-language: "en",
text-direction: ltr,
page-size: "us-letter",
page-top-margin: 0.7in,
page-bottom-margin: 0.7in,
@@ -67,6 +69,7 @@
entries-space-between-columns: 0.1cm,
entries-allow-page-break: false,
entries-short-second-row: false,
entries-degree-width: 1cm,
entries-summary-space-left: 0cm,
entries-summary-space-above: 0cm,
entries-highlights-bullet: "◦" ,
@@ -76,9 +79,9 @@
entries-highlights-space-between-items: 0cm,
entries-highlights-space-between-bullet-and-text: 0.5em,
date: datetime(
year: 2025,
month: 12,
day: 5,
year: 2026,
month: 3,
day: 20,
),
)
@@ -98,51 +101,55 @@
RenderCV reads a CV written in a YAML file, and generates a PDF with professional typography.
See the #link("https://docs.rendercv.com")[documentation] for more details.
Each section title is arbitrary.
You can choose any of the 9 entry types for each section.
Markdown syntax is supported everywhere. This is #strong[bold], #emph[italic], and #link("https://example.com")[link].
== Education
#education-entry(
[
#strong[Princeton University]
#emph[PhD] #emph[in] #emph[Computer Science]
],
[
#emph[Princeton, NJ]
#emph[Sept 2018 – May 2023]
],
main-column-second-row: [
- Thesis: Efficient Neural Architecture Search for Resource-Constrained Deployment
- Advisor: Prof. Sanjeev Arora
- NSF Graduate Research Fellowship, Siebel Scholar (Class of 2022)
],
)
#education-entry(
[
#strong[Boğaziçi University]
#emph[BS] #emph[in] #emph[Computer Engineering]
],
[
#emph[Istanbul, Türkiye]
#emph[Sept 2014 – June 2018]
],
main-column-second-row: [
- GPA: 3.97\/4.00, Valedictorian
- Fulbright Scholarship recipient for graduate studies
- Fulbright Scholarship recipient for Graduate Studies
],
)
@@ -151,115 +158,115 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Co-Founder & CTO]
#emph[Nexus AI]
],
[
#emph[San Francisco, CA]
#emph[June 2023 – present]
],
main-column-second-row: [
- Built foundation model infrastructure serving 2M+ monthly API requests with 99.97\% uptime
- Raised \$18M Series A led by Sequoia Capital, with participation from a16z and Founders Fund
- Scaled engineering team from 3 to 28 across ML research, platform, and applied AI divisions
- Developed proprietary inference optimization reducing latency by 73\% compared to baseline
],
)
#regular-entry(
[
#strong[Research Intern]
#emph[NVIDIA Research]
],
[
#emph[Santa Clara, CA]
#emph[May 2022 – Aug 2022]
],
main-column-second-row: [
- Designed sparse attention mechanism reducing transformer memory footprint by 4.2x
- Co-authored paper accepted at NeurIPS 2022 (spotlight presentation, top 5\% of submissions)
],
)
#regular-entry(
[
#strong[Research Intern]
#emph[Google DeepMind]
],
[
#emph[London, UK]
#emph[May 2021 – Aug 2021]
],
main-column-second-row: [
- Developed reinforcement learning algorithms for multi-agent coordination
- Published research at top-tier venues with significant academic impact
- ICML 2022 main conference paper, cited 340+ times within two years
- NeurIPS 2022 workshop paper on emergent communication protocols
- Invited journal extension in JMLR (2023)
],
)
#regular-entry(
[
#strong[Research Intern]
#emph[Apple ML Research]
],
[
#emph[Cupertino, CA]
#emph[May 2020 – Aug 2020]
],
main-column-second-row: [
- Created on-device neural network compression pipeline deployed across 50M+ devices
- Filed 2 patents on efficient model quantization techniques for edge inference
],
)
#regular-entry(
[
#strong[Research Intern]
#emph[Microsoft Research]
],
[
#emph[Redmond, WA]
#emph[May 2019 – Aug 2019]
],
main-column-second-row: [
- Implemented novel self-supervised learning framework for low-resource language modeling
- Research integrated into Azure Cognitive Services, reducing training data requirements by 60\%
],
)
@@ -268,38 +275,38 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[#link("https://github.com/")[FlashInfer]]
],
[
#emph[Jan 2023 – present]
],
main-column-second-row: [
#summary[Open-source library for high-performance LLM inference kernels]
- Achieved 2.8x speedup over baseline attention implementations on A100 GPUs
- Adopted by 3 major AI labs, 8,500+ GitHub stars, 200+ contributors
],
)
#regular-entry(
[
#strong[#link("https://github.com/")[NeuralPrune]]
],
[
#emph[Jan 2021]
],
main-column-second-row: [
#summary[Automated neural network pruning toolkit with differentiable masks]
- Reduced model size by 90\% with less than 1\% accuracy degradation on ImageNet
- Featured in PyTorch ecosystem tools, 4,200+ GitHub stars
],
)
@@ -308,68 +315,68 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
#regular-entry(
[
#strong[Sparse Mixture-of-Experts at Scale: Efficient Routing for Trillion-Parameter Models]
],
[
July 2023
],
main-column-second-row: [
#emph[John Doe], Sarah Williams, David Park
#link("https://doi.org/10.1234/neurips.2023.1234")[10.1234\/neurips.2023.1234] (NeurIPS 2023)
],
)
#regular-entry(
[
#strong[Neural Architecture Search via Differentiable Pruning]
],
[
Dec 2022
],
main-column-second-row: [
James Liu, #emph[John Doe]
#link("https://doi.org/10.1234/neurips.2022.5678")[10.1234\/neurips.2022.5678] (NeurIPS 2022, Spotlight)
],
)
#regular-entry(
[
#strong[Multi-Agent Reinforcement Learning with Emergent Communication]
],
[
July 2022
],
main-column-second-row: [
Maria Garcia, #emph[John Doe], Tom Anderson
#link("https://doi.org/10.1234/icml.2022.9012")[10.1234\/icml.2022.9012] (ICML 2022)
],
)
#regular-entry(
[
#strong[On-Device Model Compression via Learned Quantization]
],
[
May 2021
],
main-column-second-row: [
#emph[John Doe], Kevin Wu
#link("https://doi.org/10.1234/iclr.2021.3456")[10.1234\/iclr.2021.3456] (ICLR 2021, Best Paper Award)
],
)
@@ -417,15 +424,3 @@ See the #link("https://docs.rendercv.com")[documentation] for more details.
+ Efficient Deep Learning: A Practitioner's Perspective – Google Tech Talk (2022)
],
)
== Any Section Title
You can use any section title you want.
You can choose any entry type for the section: `TextEntry`, `ExperienceEntry`, `EducationEntry`, `PublicationEntry`, `BulletEntry`, `NumberedEntry`, or `ReversedNumberedEntry`.
Markdown syntax is supported everywhere.
The `design` field in YAML gives you control over almost any aspect of your CV design.
See the #link("https://docs.rendercv.com")[documentation] for more details.

View File

@@ -10,7 +10,7 @@ from rendercv.schema.sample_generator import create_sample_yaml_input_file
repository_root = pathlib.Path(__file__).parent.parent
rendercv_path = repository_root / "rendercv"
image_assets_directory = repository_root / "docs" / "assets" / "images" / "examples"
rendercv_typst_examples_directory = repository_root / "rendercv-typst" / "examples"
examples_directory_path = pathlib.Path(__file__).parent.parent / "examples"
@@ -47,5 +47,11 @@ for theme in available_themes:
        image_assets_directory / f"{theme}.png",
    )

    rendercv_typst_examples_directory.mkdir(parents=True, exist_ok=True)
    shutil.copy(
        temp_directory_path / f"{yaml_file_path.stem}.typ",
        rendercv_typst_examples_directory / f"{theme}.typ",
    )
print("Examples generated successfully.") # NOQA: T201

View File

@@ -1,6 +1,8 @@
import atexit
import functools
import pathlib
import shutil
import tempfile
import rendercv_fonts
import typst
@@ -110,6 +112,51 @@ def copy_photo_next_to_typst_file(
    shutil.copy(photo_path, copy_to)


@functools.lru_cache(maxsize=1)
def get_local_package_path() -> pathlib.Path | None:
    """Set up local Typst package resolution for development.

    Why:
        During development, the rendercv-typst package version referenced in
        templates may not be published to the Typst registry yet. This detects
        if the rendercv-typst/ directory exists in the repository and creates a
        temporary package cache so the Typst compiler resolves the import
        locally. In production (installed via pip), rendercv-typst/ won't exist
        and the compiler falls back to the Typst registry.

    Returns:
        Path to temporary package cache directory, or None if not in development.
    """
    repository_root = pathlib.Path(__file__).parent.parent.parent.parent
    rendercv_typst_directory = repository_root / "rendercv-typst"
    typst_toml_path = rendercv_typst_directory / "typst.toml"

    if not typst_toml_path.is_file():
        return None

    version = None
    for line in typst_toml_path.read_text(encoding="utf-8").splitlines():
        stripped = line.strip()
        if stripped.startswith("version"):
            version = stripped.split("=", 1)[1].strip().strip('"')
            break

    if version is None:
        return None

    temp_dir = pathlib.Path(tempfile.mkdtemp(prefix="rendercv-pkg-"))
    atexit.register(shutil.rmtree, str(temp_dir), True)

    package_directory = temp_dir / "preview" / "rendercv" / version
    shutil.copytree(
        rendercv_typst_directory,
        package_directory,
        ignore=shutil.ignore_patterns(".git*", "CHANGELOG.md", "*.pdf"),
    )

    return temp_dir
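The cache directory built above has to follow Typst's local-package layout, `<package_path>/preview/<name>/<version>/`, for the compiler to resolve `@preview/rendercv:<version>` imports against it. A minimal standalone sketch of that layout (the version string here is hypothetical, for illustration only):

```python
import pathlib
import tempfile

# Hypothetical version string, for illustration only.
version = "0.3.0"

# Typst resolves `@preview/<name>:<version>` imports against
# `<package_path>/preview/<name>/<version>/` on disk.
temp_dir = pathlib.Path(tempfile.mkdtemp(prefix="rendercv-pkg-"))
package_directory = temp_dir / "preview" / "rendercv" / version
package_directory.mkdir(parents=True)

print(package_directory.relative_to(temp_dir).as_posix())  # → preview/rendercv/0.3.0
```

Anything other than this exact directory shape makes the compiler fall through to the remote registry, which is why `copytree` above targets `preview/rendercv/<version>` rather than the repository directory name.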
@functools.lru_cache(maxsize=1)
def get_typst_compiler(
input_file_path: pathlib.Path | None,
@@ -141,4 +188,5 @@ def get_typst_compiler(
else pathlib.Path.cwd() / "fonts"
),
],
package_path=get_local_package_path(),
)