mirror of https://github.com/mountain-loop/yaak.git (synced 2026-02-02 02:32:07 -05:00)

Compare commits: actions-sy... → v2025.9.3 (2 commits)

| Author | SHA1 | Date |
|---|---|---|
|  | 9e6523f477 |  |
|  | 0c8d180928 |  |
@@ -1,72 +0,0 @@
# Claude Context: Detaching Tauri from Yaak

## Goal

Make Yaak runnable as a standalone CLI without Tauri as a dependency. The core Rust crates in `crates/` should be usable independently, while Tauri-specific code lives in `crates-tauri/`.

## Project Structure

```
crates/        # Core crates - should NOT depend on Tauri
crates-tauri/  # Tauri-specific crates (yaak-app, yaak-tauri-utils, etc.)
crates-cli/    # CLI crate (yaak-cli)
```

## Completed Work

### 1. Folder Restructure

- Moved Tauri-dependent app code to `crates-tauri/yaak-app/`
- Created `crates-tauri/yaak-tauri-utils/` for shared Tauri utilities (window traits, api_client, error handling)
- Created `crates-cli/yaak-cli/` for the standalone CLI

### 2. Decoupled Crates (no longer depend on Tauri)

- **yaak-models**: Uses the `init_standalone()` pattern for CLI database access
- **yaak-http**: Removed the Tauri plugin; HttpConnectionManager is initialized in yaak-app setup
- **yaak-common**: Only contains Tauri-free utilities (serde, platform)
- **yaak-crypto**: Removed the Tauri plugin; EncryptionManager is initialized in yaak-app setup; commands moved to yaak-app
- **yaak-grpc**: Replaced AppHandle with a GrpcConfig struct; uses tokio::process::Command instead of a Tauri sidecar

### 3. CLI Implementation

- Basic CLI at `crates-cli/yaak-cli/src/main.rs`
- Commands: workspaces, requests, send (by ID), get (ad-hoc URL), create
- Uses the same database as the Tauri app via `yaak_models::init_standalone()` (sketched below)
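For reference, a minimal sketch of that standalone path, lifted from the `yaak-cli` main.rs that appears later in this compare (error handling trimmed):

```rust
use std::path::PathBuf;

fn list_workspaces(data_dir: PathBuf) {
    let db_path = data_dir.join("db.sqlite");
    let blob_path = data_dir.join("blobs.sqlite");

    // init_standalone() opens the same SQLite databases the Tauri app uses,
    // returning the query manager, blob manager, and an update receiver
    // without needing a Tauri AppHandle.
    let (query_manager, _blob_manager, _rx) =
        yaak_models::init_standalone(&db_path, &blob_path).expect("Failed to initialize database");

    let db = query_manager.connect();
    for ws in db.list_workspaces().expect("Failed to list workspaces") {
        println!("{} - {}", ws.id, ws.name);
    }
}
```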
## Remaining Work

### Crates Still Depending on Tauri (in `crates/`)

1. **yaak-git** (3 files) - Moderate complexity
2. **yaak-plugins** (13 files) - **Hardest** - deeply integrated with Tauri for plugin-to-window communication
3. **yaak-sync** (4 files) - Moderate complexity
4. **yaak-ws** (5 files) - Moderate complexity

### Pattern for Decoupling

1. Remove the Tauri plugin `init()` function from the crate
2. Move commands to `yaak-app/src/commands.rs` or keep them inline in `lib.rs`
3. Move extension traits (e.g., `SomethingManagerExt`) to yaak-app or yaak-tauri-utils (sketched below)
4. Initialize managers in yaak-app's `.setup()` block
5. Remove `tauri` from the crate's Cargo.toml dependencies
6. Update `crates-tauri/yaak-app/capabilities/default.json` to remove the plugin permission
7. Replace `tauri::async_runtime::block_on` with `tokio::runtime::Handle::current().block_on()`
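A rough sketch of steps 1-4 with hypothetical names (`ThingManager`/`ThingManagerExt` are stand-ins; the real shape of the extension trait can be seen in `EncryptionManagerExt` further down this page):

```rust
use tauri::{App, Manager, Runtime, State};

// The manager itself lives in a core crate and knows nothing about Tauri.
pub struct ThingManager;

// Step 3: the extension trait moves to yaak-app / yaak-tauri-utils.
pub trait ThingManagerExt<'a, R: Runtime> {
    fn things(&'a self) -> State<'a, ThingManager>;
}

impl<'a, R: Runtime, M: Manager<R>> ThingManagerExt<'a, R> for M {
    fn things(&'a self) -> State<'a, ThingManager> {
        self.state::<ThingManager>()
    }
}

// Step 4: instead of a plugin init(), yaak-app constructs the manager in
// its .setup() block and puts it into Tauri-managed state.
pub fn setup(app: &App) -> Result<(), Box<dyn std::error::Error>> {
    app.manage(ThingManager);
    Ok(())
}
```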
## Key Files

- `crates-tauri/yaak-app/src/lib.rs` - Main Tauri app; the setup block initializes managers
- `crates-tauri/yaak-app/src/commands.rs` - Migrated Tauri commands
- `crates-tauri/yaak-app/src/models_ext.rs` - Database plugin and extension traits
- `crates-tauri/yaak-tauri-utils/src/window.rs` - WorkspaceWindowTrait for window state
- `crates/yaak-models/src/lib.rs` - Contains `init_standalone()` for CLI usage

## Git Branch

Working on the `detach-tauri` branch.

## Recent Commits

```
c40cff40 Remove Tauri dependencies from yaak-crypto and yaak-grpc
df495f1d Move Tauri utilities from yaak-common to yaak-tauri-utils
481e0273 Remove Tauri dependencies from yaak-http and yaak-common
10568ac3 Add HTTP request sending to yaak-cli
bcb7d600 Add yaak-cli stub with basic database access
e718a5f1 Refactor models_ext to use init_standalone from yaak-models
```

## Testing

- Run `cargo check -p <crate>` to verify a crate builds without Tauri
- Run `npm run app-dev` to verify the Tauri app still works
- Run `cargo run -p yaak-cli -- --help` to test the CLI

@@ -1,51 +0,0 @@
---
description: Review a PR in a new worktree
allowed-tools: Bash(git worktree:*), Bash(gh pr:*)
---

Review a GitHub pull request in a new git worktree.

## Usage

```
/review-pr <PR_NUMBER>
```

## What to do

1. List all open pull requests and ask the user to select one
2. Get PR information using `gh pr view <PR_NUMBER> --json number,headRefName`
3. Extract the branch name from the PR
4. Create a new worktree at `../yaak-worktrees/pr-<PR_NUMBER>` using `git worktree add`, with a timeout of at least 300000ms (5 minutes) since the post-checkout hook runs a bootstrap script
5. Check out the PR branch in the new worktree using `gh pr checkout <PR_NUMBER>`
6. The post-checkout hook will automatically:
   - Create `.env.local` with unique ports
   - Copy editor config folders
   - Run `npm install && npm run bootstrap`
7. Inform the user:
   - Where the worktree was created
   - What ports were assigned
   - How to access it (cd command)
   - How to run the dev server
   - How to remove the worktree when done

## Example Output

```
Created worktree for PR #123 at ../yaak-worktrees/pr-123
Branch: feature-auth
Ports: Vite (1421), MCP (64344)

To start working:
cd ../yaak-worktrees/pr-123
npm run app-dev

To remove when done:
git worktree remove ../yaak-worktrees/pr-123
```

## Error Handling

- If the PR doesn't exist, show a helpful error
- If the worktree already exists, inform the user and ask whether to remove and recreate it
- If the `gh` CLI is not available, tell the user to install it

@@ -1,47 +0,0 @@
---
description: Generate formatted release notes for Yaak releases
allowed-tools: Bash(git tag:*)
---

Generate formatted release notes for Yaak releases by analyzing git history and pull request descriptions.

## What to do

1. Identify the version tag and the previous version
2. Retrieve all commits between the versions
   - If the version is a beta, retrieve commits between it and the previous beta version
   - If the version is stable, retrieve commits between it and the previous stable version
3. Fetch PR descriptions for linked issues to find:
   - Feedback URLs (feedback.yaak.app)
   - Additional context and descriptions
   - Installation links for plugins
4. Format the release notes using the standard Yaak format:
   - Changelog badge at the top
   - Bulleted list of changes with PR links
   - Feedback links where available
   - Full changelog comparison link at the bottom

## Output Format

The skill generates markdown-formatted release notes following this structure:

```markdown
[](https://yaak.app/changelog/VERSION)

- Feature/fix description by @username in [#123](https://github.com/mountain-loop/yaak/pull/123)
- [Linked feedback item](https://feedback.yaak.app/p/item) by @username in [#456](https://github.com/mountain-loop/yaak/pull/456)
- A simple item that doesn't have a feedback or PR link

**Full Changelog**: https://github.com/mountain-loop/yaak/compare/vPREV...vCURRENT
```

**IMPORTANT**: Always add blank lines around the markdown code fence and output the markdown code block last
**IMPORTANT**: PRs by `@gschier` should not mention the @username

## After Generating Release Notes

After outputting the release notes, ask the user if they would like to create a draft GitHub release with these notes. If they confirm, create the release using:

```bash
gh release create <tag> --draft --prerelease --title "<tag>" --notes '<release notes>'
```

@@ -1,27 +0,0 @@
# Project Rules

## General Development

- **NEVER** commit or push without explicit confirmation

## Build and Lint

- **ALWAYS** run `npm run lint` after modifying TypeScript or JavaScript files
- Run `npm run bootstrap` after changing plugin runtime or MCP server code

## Plugin System

### Backend Constraints

- Always use `UpdateSource::Plugin` when calling database methods from plugin events (sketched below)
- Never send timestamps (`createdAt`, `updatedAt`) from TypeScript - the Rust backend controls these
- The backend uses `NaiveDateTime` (no timezone), so avoid sending ISO timestamp strings
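The update-source rule, condensed into a runnable sketch; the variants listed in the comment are the ones that appear elsewhere in this compare, and `UpdateSource::Plugin` comes from the rule itself:

```rust
use yaak_models::util::UpdateSource;

// The UpdateSource passed into database writes records where a change came
// from. Sources that appear elsewhere in this compare:
//   UpdateSource::Sync                               // CLI create command
//   UpdateSource::Background                         // cookie persistence
//   UpdateSource::from_window_label(window.label())  // UI windows
// Plugin event handlers must always pass this one instead:
fn plugin_update_source() -> UpdateSource {
    UpdateSource::Plugin
}
```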
### MCP Server

- The MCP server has **no active window context** - it cannot call `window.workspaceId()`
- Get the workspace ID from `workspaceCtx.yaak.workspace.list()` instead

## Rust Type Generation

- Run `cargo test --package yaak-plugins` (and likewise for other crates) to regenerate TypeScript bindings after modifying Rust event types (the mechanism is sketched below)
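A sketch of where those bindings come from, using ts-rs; the field name and `rename_all` attribute here are assumptions, inferred from the generated `WatchResult` binding that appears later in this compare:

```rust
use ts_rs::TS;

// `cargo test` runs the export tests that ts-rs derives for types like
// this, (re)writing a TypeScript binding file along the lines of:
//   export type WatchResult = { unlistenEvent: string, };
#[derive(TS)]
#[ts(export, rename_all = "camelCase")]
pub struct WatchResult {
    pub unlisten_event: String,
}
```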
@@ -1,35 +0,0 @@
# Worktree Management Skill

## Creating Worktrees

When creating git worktrees for this project, ALWAYS use the path format:

```
../yaak-worktrees/<NAME>
```

For example:

- `git worktree add ../yaak-worktrees/feature-auth`
- `git worktree add ../yaak-worktrees/bugfix-login`
- `git worktree add ../yaak-worktrees/refactor-api`

## What Happens Automatically

The post-checkout hook will automatically:

1. Create `.env.local` with unique ports (YAAK_DEV_PORT and YAAK_PLUGIN_MCP_SERVER_PORT)
2. Copy gitignored editor config folders (.zed, .idea, etc.)
3. Run `npm install && npm run bootstrap`

## Deleting Worktrees

```bash
git worktree remove ../yaak-worktrees/<NAME>
```

## Port Assignments

- Main worktree: 1420 (Vite), 64343 (MCP)
- First worktree: 1421, 64344
- Second worktree: 1422, 64345
- etc.

Each worktree can run `npm run app-dev` simultaneously without conflicts.
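The scheme is a fixed offset per worktree; a throwaway sketch (the helper name is made up):

```rust
// Hypothetical helper mirroring the table above: index 0 is the main
// worktree, and each additional worktree shifts both ports up by one.
fn worktree_ports(index: u16) -> (u16, u16) {
    (1420 + index, 64343 + index) // (Vite, MCP)
}

fn main() {
    assert_eq!(worktree_ports(0), (1420, 64343)); // main worktree
    assert_eq!(worktree_ports(2), (1422, 64345)); // second worktree
}
```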
.gitattributes (vendored, 9 changes)
@@ -1,7 +1,2 @@
-crates-tauri/yaak-app/vendored/**/* linguist-generated=true
-crates-tauri/yaak-app/gen/schemas/**/* linguist-generated=true
-**/bindings/* linguist-generated=true
-crates/yaak-templates/pkg/* linguist-generated=true
-
-# Ensure consistent line endings for test files that check exact content
-crates/yaak-http/tests/test.txt text eol=lf
+src-tauri/vendored/**/* linguist-generated=true
+src-tauri/gen/schemas/**/* linguist-generated=true
.github/workflows/ci.yml (vendored, 3 changes)
@@ -18,13 +18,14 @@ jobs:
      - uses: dtolnay/rust-toolchain@stable
      - uses: Swatinem/rust-cache@v2
        with:
          workspaces: 'src-tauri'
          shared-key: ci
          cache-on-failure: true

      - run: npm ci
      - run: npm run bootstrap
      - run: npm run lint
      - name: Run JS Tests
        run: npm test
      - name: Run Rust Tests
        run: cargo test --all
        working-directory: src-tauri
.github/workflows/claude.yml (vendored, 50 changes)
@@ -1,50 +0,0 @@
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
      actions: read # Required for Claude to read CI results on PRs
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}

          # This is an optional setting that allows Claude to read CI results on PRs
          additional_permissions: |
            actions: read

          # Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it.
          # prompt: 'Update the pull request description to include a summary of changes.'

          # Optional: Add claude_args to customize behavior and configuration
          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://code.claude.com/docs/en/cli-reference for available options
          # claude_args: '--allowed-tools Bash(gh pr:*)'
.github/workflows/release.yml (vendored, 98 changes)
@@ -1,7 +1,7 @@
 name: Generate Artifacts
 on:
   push:
-    tags: [v*]
+    tags: [ v* ]

 jobs:
   build-artifacts:
@@ -13,37 +13,37 @@ jobs:
       fail-fast: false
       matrix:
         include:
-          - platform: "macos-latest" # for Arm-based Macs (M1 and above).
-            args: "--target aarch64-apple-darwin"
-            yaak_arch: "arm64"
-            os: "macos"
-            targets: "aarch64-apple-darwin"
-          - platform: "macos-latest" # for Intel-based Macs.
-            args: "--target x86_64-apple-darwin"
-            yaak_arch: "x64"
-            os: "macos"
-            targets: "x86_64-apple-darwin"
-          - platform: "ubuntu-22.04"
-            args: ""
-            yaak_arch: "x64"
-            os: "ubuntu"
-            targets: ""
-          - platform: "ubuntu-22.04-arm"
-            args: ""
-            yaak_arch: "arm64"
-            os: "ubuntu"
-            targets: ""
-          - platform: "windows-latest"
-            args: ""
-            yaak_arch: "x64"
-            os: "windows"
-            targets: ""
+          - platform: 'macos-latest' # for Arm-based Macs (M1 and above).
+            args: '--target aarch64-apple-darwin'
+            yaak_arch: 'arm64'
+            os: 'macos'
+            targets: 'aarch64-apple-darwin'
+          - platform: 'macos-latest' # for Intel-based Macs.
+            args: '--target x86_64-apple-darwin'
+            yaak_arch: 'x64'
+            os: 'macos'
+            targets: 'x86_64-apple-darwin'
+          - platform: 'ubuntu-22.04'
+            args: ''
+            yaak_arch: 'x64'
+            os: 'ubuntu'
+            targets: ''
+          - platform: 'ubuntu-22.04-arm'
+            args: ''
+            yaak_arch: 'arm64'
+            os: 'ubuntu'
+            targets: ''
+          - platform: 'windows-latest'
+            args: ''
+            yaak_arch: 'x64'
+            os: 'windows'
+            targets: ''
           # Windows ARM64
-          - platform: "windows-latest"
-            args: "--target aarch64-pc-windows-msvc"
-            yaak_arch: "arm64"
-            os: "windows"
-            targets: "aarch64-pc-windows-msvc"
+          - platform: 'windows-latest'
+            args: '--target aarch64-pc-windows-msvc'
+            yaak_arch: 'arm64'
+            os: 'windows'
+            targets: 'aarch64-pc-windows-msvc'
     runs-on: ${{ matrix.platform }}
     timeout-minutes: 40
     steps:
@@ -60,6 +60,7 @@ jobs:

       - uses: Swatinem/rust-cache@v2
         with:
+          workspaces: 'src-tauri'
           shared-key: ci
           cache-on-failure: true

@@ -88,43 +89,18 @@ jobs:
          & $exe --version

      - run: npm ci
      - run: npm run bootstrap
        env:
          YAAK_TARGET_ARCH: ${{ matrix.yaak_arch }}
      - run: npm run lint
      - name: Run JS Tests
        run: npm test
      - name: Run Rust Tests
        run: cargo test --all
        working-directory: src-tauri

      - name: Set version
        run: npm run replace-version
        env:
          YAAK_VERSION: ${{ github.ref_name }}

      - name: Sign vendored binaries (macOS only)
        if: matrix.os == 'macos'
        env:
          APPLE_CERTIFICATE: ${{ secrets.APPLE_CERTIFICATE }}
          APPLE_CERTIFICATE_PASSWORD: ${{ secrets.APPLE_CERTIFICATE_PASSWORD }}
          APPLE_SIGNING_IDENTITY: ${{ secrets.APPLE_SIGNING_IDENTITY }}
          KEYCHAIN_PASSWORD: ${{ secrets.KEYCHAIN_PASSWORD }}
        run: |
          # Create keychain
          KEYCHAIN_PATH=$RUNNER_TEMP/app-signing.keychain-db
          security create-keychain -p "$KEYCHAIN_PASSWORD" $KEYCHAIN_PATH
          security set-keychain-settings -lut 21600 $KEYCHAIN_PATH
          security unlock-keychain -p "$KEYCHAIN_PASSWORD" $KEYCHAIN_PATH

          # Import certificate
          echo "$APPLE_CERTIFICATE" | base64 --decode > certificate.p12
          security import certificate.p12 -P "$APPLE_CERTIFICATE_PASSWORD" -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
          security list-keychain -d user -s $KEYCHAIN_PATH

          # Sign vendored binaries with hardened runtime and their specific entitlements
          codesign --force --options runtime --entitlements crates-tauri/yaak-app/macos/entitlements.yaakprotoc.plist --sign "$APPLE_SIGNING_IDENTITY" crates-tauri/yaak-app/vendored/protoc/yaakprotoc || true
          codesign --force --options runtime --entitlements crates-tauri/yaak-app/macos/entitlements.yaaknode.plist --sign "$APPLE_SIGNING_IDENTITY" crates-tauri/yaak-app/vendored/node/yaaknode || true

      - uses: tauri-apps/tauri-action@v0
        env:
          YAAK_TARGET_ARCH: ${{ matrix.yaak_arch }}
@@ -147,9 +123,9 @@ jobs:
           AZURE_CLIENT_SECRET: ${{ matrix.os == 'windows' && secrets.AZURE_CLIENT_SECRET }}
           AZURE_TENANT_ID: ${{ matrix.os == 'windows' && secrets.AZURE_TENANT_ID }}
         with:
-          tagName: "v__VERSION__"
-          releaseName: "Release __VERSION__"
-          releaseBody: "[Changelog __VERSION__](https://yaak.app/blog/__VERSION__)"
+          tagName: 'v__VERSION__'
+          releaseName: 'Release __VERSION__'
+          releaseBody: '[Changelog __VERSION__](https://yaak.app/blog/__VERSION__)'
           releaseDraft: true
           prerelease: true
-          args: "${{ matrix.args }} --config ./crates-tauri/yaak-app/tauri.release.conf.json"
+          args: '${{ matrix.args }} --config ./src-tauri/tauri.release.conf.json'
.gitignore (vendored, 11 changes)
@@ -25,7 +25,6 @@ dist-ssr
*.sln
*.sw?
.eslintcache
out

*.sqlite
*.sqlite-*
@@ -34,13 +33,3 @@ out
 
 .tmp
 tmp
-.zed
-codebook.toml
-target
-
-# Per-worktree Tauri config (generated by post-checkout hook)
-crates-tauri/yaak-app/tauri.worktree.conf.json
-
-# Tauri auto-generated permission files
-**/permissions/autogenerated
-**/permissions/schemas
@@ -1 +0,0 @@
node scripts/git-hooks/post-checkout.mjs "$@"
Cargo.toml (73 changes)
@@ -1,73 +0,0 @@
[workspace]
resolver = "2"
members = [
    # Shared crates (no Tauri dependency)
    "crates/yaak-actions",
    "crates/yaak-actions-builtin",
    "crates/yaak-core",
    "crates/yaak-common",
    "crates/yaak-crypto",
    "crates/yaak-git",
    "crates/yaak-grpc",
    "crates/yaak-http",
    "crates/yaak-models",
    "crates/yaak-plugins",
    "crates/yaak-sse",
    "crates/yaak-sync",
    "crates/yaak-templates",
    "crates/yaak-tls",
    "crates/yaak-ws",
    # CLI crates
    "crates-cli/yaak-cli",
    # Tauri-specific crates
    "crates-tauri/yaak-app",
    "crates-tauri/yaak-fonts",
    "crates-tauri/yaak-license",
    "crates-tauri/yaak-mac-window",
    "crates-tauri/yaak-tauri-utils",
]

[workspace.dependencies]
chrono = "0.4.42"
hex = "0.4.3"
keyring = "3.6.3"
log = "0.4.29"
reqwest = "0.12.20"
rustls = { version = "0.23.34", default-features = false }
rustls-platform-verifier = "0.6.2"
serde = "1.0.228"
serde_json = "1.0.145"
sha2 = "0.10.9"
tauri = "2.9.5"
tauri-plugin = "2.5.2"
tauri-plugin-dialog = "2.4.2"
tauri-plugin-shell = "2.3.3"
thiserror = "2.0.17"
tokio = "1.48.0"
ts-rs = "11.1.0"

# Internal crates - shared
yaak-actions = { path = "crates/yaak-actions" }
yaak-actions-builtin = { path = "crates/yaak-actions-builtin" }
yaak-core = { path = "crates/yaak-core" }
yaak-common = { path = "crates/yaak-common" }
yaak-crypto = { path = "crates/yaak-crypto" }
yaak-git = { path = "crates/yaak-git" }
yaak-grpc = { path = "crates/yaak-grpc" }
yaak-http = { path = "crates/yaak-http" }
yaak-models = { path = "crates/yaak-models" }
yaak-plugins = { path = "crates/yaak-plugins" }
yaak-sse = { path = "crates/yaak-sse" }
yaak-sync = { path = "crates/yaak-sync" }
yaak-templates = { path = "crates/yaak-templates" }
yaak-tls = { path = "crates/yaak-tls" }
yaak-ws = { path = "crates/yaak-ws" }

# Internal crates - Tauri-specific
yaak-fonts = { path = "crates-tauri/yaak-fonts" }
yaak-license = { path = "crates-tauri/yaak-license" }
yaak-mac-window = { path = "crates-tauri/yaak-mac-window" }
yaak-tauri-utils = { path = "crates-tauri/yaak-tauri-utils" }

[profile.release]
strip = false
@@ -1,6 +1,6 @@
 <p align="center">
   <a href="https://github.com/JamesIves/github-sponsors-readme-action">
-    <img width="200px" src="https://github.com/mountain-loop/yaak/raw/main/crates-tauri/yaak-app/icons/icon.png">
+    <img width="200px" src="https://github.com/mountain-loop/yaak/raw/main/src-tauri/icons/icon.png">
   </a>
 </p>
@@ -19,7 +19,7 @@


 <p align="center">
-  <!-- sponsors-premium --><a href="https://github.com/MVST-Solutions"><img src="https://github.com/MVST-Solutions.png" width="80px" alt="User avatar: MVST-Solutions" /></a> <a href="https://github.com/dharsanb"><img src="https://github.com/dharsanb.png" width="80px" alt="User avatar: dharsanb" /></a> <a href="https://github.com/railwayapp"><img src="https://github.com/railwayapp.png" width="80px" alt="User avatar: railwayapp" /></a> <a href="https://github.com/caseyamcl"><img src="https://github.com/caseyamcl.png" width="80px" alt="User avatar: caseyamcl" /></a> <a href="https://github.com/bytebase"><img src="https://github.com/bytebase.png" width="80px" alt="User avatar: bytebase" /></a> <a href="https://github.com/"><img src="https://raw.githubusercontent.com/JamesIves/github-sponsors-readme-action/dev/.github/assets/placeholder.png" width="80px" alt="User avatar: " /></a> <!-- sponsors-premium -->
+  <!-- sponsors-premium --><a href="https://github.com/MVST-Solutions"><img src="https://github.com/MVST-Solutions.png" width="80px" alt="User avatar: MVST-Solutions" /></a> <a href="https://github.com/dharsanb"><img src="https://github.com/dharsanb.png" width="80px" alt="User avatar: dharsanb" /></a> <a href="https://github.com/railwayapp"><img src="https://github.com/railwayapp.png" width="80px" alt="User avatar: railwayapp" /></a> <a href="https://github.com/caseyamcl"><img src="https://github.com/caseyamcl.png" width="80px" alt="User avatar: caseyamcl" /></a> <a href="https://github.com/"><img src="https://raw.githubusercontent.com/JamesIves/github-sponsors-readme-action/dev/.github/assets/placeholder.png" width="80px" alt="User avatar: " /></a> <!-- sponsors-premium -->
 </p>
 <p align="center">
   <!-- sponsors-base --><a href="https://github.com/seanwash"><img src="https://github.com/seanwash.png" width="50px" alt="User avatar: seanwash" /></a> <a href="https://github.com/jerath"><img src="https://github.com/jerath.png" width="50px" alt="User avatar: jerath" /></a> <a href="https://github.com/itsa-sh"><img src="https://github.com/itsa-sh.png" width="50px" alt="User avatar: itsa-sh" /></a> <a href="https://github.com/dmmulroy"><img src="https://github.com/dmmulroy.png" width="50px" alt="User avatar: dmmulroy" /></a> <a href="https://github.com/timcole"><img src="https://github.com/timcole.png" width="50px" alt="User avatar: timcole" /></a> <a href="https://github.com/VLZH"><img src="https://github.com/VLZH.png" width="50px" alt="User avatar: VLZH" /></a> <a href="https://github.com/terasaka2k"><img src="https://github.com/terasaka2k.png" width="50px" alt="User avatar: terasaka2k" /></a> <a href="https://github.com/andriyor"><img src="https://github.com/andriyor.png" width="50px" alt="User avatar: andriyor" /></a> <a href="https://github.com/majudhu"><img src="https://github.com/majudhu.png" width="50px" alt="User avatar: majudhu" /></a> <a href="https://github.com/axelrindle"><img src="https://github.com/axelrindle.png" width="50px" alt="User avatar: axelrindle" /></a> <a href="https://github.com/jirizverina"><img src="https://github.com/jirizverina.png" width="50px" alt="User avatar: jirizverina" /></a> <a href="https://github.com/chip-well"><img src="https://github.com/chip-well.png" width="50px" alt="User avatar: chip-well" /></a> <a href="https://github.com/GRAYAH"><img src="https://github.com/GRAYAH.png" width="50px" alt="User avatar: GRAYAH" /></a> <!-- sponsors-base -->
@@ -64,7 +64,7 @@ visit [`DEVELOPMENT.md`](DEVELOPMENT.md) for tips on setting up your environment
 ## Useful Resources

 - [Feedback and Bug Reports](https://feedback.yaak.app)
-- [Documentation](https://yaak.app/docs)
+- [Documentation](https://feedback.yaak.app/help)
 - [Yaak vs Postman](https://yaak.app/alternatives/postman)
 - [Yaak vs Bruno](https://yaak.app/alternatives/bruno)
 - [Yaak vs Insomnia](https://yaak.app/alternatives/insomnia)
biome.json (12 changes)
@@ -1,5 +1,5 @@
 {
-  "$schema": "https://biomejs.dev/schemas/2.3.11/schema.json",
+  "$schema": "https://biomejs.dev/schemas/2.3.7/schema.json",
   "linter": {
     "enabled": true,
     "rules": {
@@ -38,16 +38,14 @@
      "!**/node_modules",
      "!**/dist",
      "!**/build",
      "!target",
      "!scripts",
      "!crates",
      "!crates-tauri",
      "!packages/plugin-runtime",
      "!packages/plugin-runtime-types",
      "!src-tauri",
      "!src-web/tailwind.config.cjs",
      "!src-web/postcss.config.cjs",
      "!src-web/vite.config.ts",
      "!src-web/routeTree.gen.ts",
      "!packages/plugin-runtime-types/lib",
      "!**/bindings"
      "!src-web/routeTree.gen.ts"
    ]
  }
}
@@ -1,24 +0,0 @@
[package]
name = "yaak-cli"
version = "0.1.0"
edition = "2024"
publish = false

[[bin]]
name = "yaakcli"
path = "src/main.rs"

[dependencies]
clap = { version = "4", features = ["derive"] }
dirs = "6"
env_logger = "0.11"
log = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
yaak-actions = { workspace = true }
yaak-actions-builtin = { workspace = true }
yaak-crypto = { workspace = true }
yaak-http = { workspace = true }
yaak-models = { workspace = true }
yaak-plugins = { workspace = true }
yaak-templates = { workspace = true }
@@ -1,290 +0,0 @@
use clap::{Parser, Subcommand};
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::mpsc;
use yaak_http::sender::{HttpSender, ReqwestSender};
use yaak_http::types::{SendableHttpRequest, SendableHttpRequestOptions};
use yaak_models::models::HttpRequest;
use yaak_models::util::UpdateSource;
use yaak_plugins::events::PluginContext;
use yaak_plugins::manager::PluginManager;

#[derive(Parser)]
#[command(name = "yaakcli")]
#[command(about = "Yaak CLI - API client from the command line")]
struct Cli {
    /// Use a custom data directory
    #[arg(long, global = true)]
    data_dir: Option<PathBuf>,

    /// Environment ID to use for variable substitution
    #[arg(long, short, global = true)]
    environment: Option<String>,

    /// Enable verbose logging
    #[arg(long, short, global = true)]
    verbose: bool,

    #[command(subcommand)]
    command: Commands,
}

#[derive(Subcommand)]
enum Commands {
    /// List all workspaces
    Workspaces,
    /// List requests in a workspace
    Requests {
        /// Workspace ID
        workspace_id: String,
    },
    /// Send an HTTP request by ID
    Send {
        /// Request ID
        request_id: String,
    },
    /// Send a GET request to a URL
    Get {
        /// URL to request
        url: String,
    },
    /// Create a new HTTP request
    Create {
        /// Workspace ID
        workspace_id: String,
        /// Request name
        #[arg(short, long)]
        name: String,
        /// HTTP method
        #[arg(short, long, default_value = "GET")]
        method: String,
        /// URL
        #[arg(short, long)]
        url: String,
    },
}

#[tokio::main]
async fn main() {
    let cli = Cli::parse();

    // Initialize logging
    if cli.verbose {
        env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();
    }

    // Use the same app_id for both data directory and keyring
    let app_id = if cfg!(debug_assertions) { "app.yaak.desktop.dev" } else { "app.yaak.desktop" };

    let data_dir = cli.data_dir.unwrap_or_else(|| {
        dirs::data_dir().expect("Could not determine data directory").join(app_id)
    });

    let db_path = data_dir.join("db.sqlite");
    let blob_path = data_dir.join("blobs.sqlite");

    let (query_manager, _blob_manager, _rx) =
        yaak_models::init_standalone(&db_path, &blob_path).expect("Failed to initialize database");

    let db = query_manager.connect();

    // Initialize plugin manager for template functions
    let vendored_plugin_dir = data_dir.join("vendored-plugins");
    let installed_plugin_dir = data_dir.join("installed-plugins");

    // Use system node for CLI (must be in PATH)
    let node_bin_path = PathBuf::from("node");

    // Find the plugin runtime - check the YAAK_PLUGIN_RUNTIME env var, then fall back to the development path
    let plugin_runtime_main =
        std::env::var("YAAK_PLUGIN_RUNTIME").map(PathBuf::from).unwrap_or_else(|_| {
            // Development fallback: look relative to crate root
            PathBuf::from(env!("CARGO_MANIFEST_DIR"))
                .join("../../crates-tauri/yaak-app/vendored/plugin-runtime/index.cjs")
        });

    // Create plugin manager (plugins may not be available in CLI context)
    let plugin_manager = Arc::new(
        PluginManager::new(
            vendored_plugin_dir.clone(),
            installed_plugin_dir.clone(),
            node_bin_path.clone(),
            plugin_runtime_main,
            false,
        )
        .await,
    );

    // Initialize plugins from database
    let plugins = db.list_plugins().unwrap_or_default();
    if !plugins.is_empty() {
        let errors =
            plugin_manager.initialize_all_plugins(plugins, &PluginContext::new_empty()).await;
        for (plugin_dir, error_msg) in errors {
            eprintln!("Warning: Failed to initialize plugin '{}': {}", plugin_dir, error_msg);
        }
    }

    match cli.command {
        Commands::Workspaces => {
            let workspaces = db.list_workspaces().expect("Failed to list workspaces");
            if workspaces.is_empty() {
                println!("No workspaces found");
            } else {
                for ws in workspaces {
                    println!("{} - {}", ws.id, ws.name);
                }
            }
        }
        Commands::Requests { workspace_id } => {
            let requests = db.list_http_requests(&workspace_id).expect("Failed to list requests");
            if requests.is_empty() {
                println!("No requests found in workspace {}", workspace_id);
            } else {
                for req in requests {
                    println!("{} - {} {}", req.id, req.method, req.name);
                }
            }
        }
        Commands::Send { request_id } => {
            use yaak_actions::{
                ActionExecutor, ActionId, ActionParams, ActionResult, ActionTarget, CurrentContext,
            };
            use yaak_actions_builtin::{BuiltinActionDependencies, register_http_actions};

            // Create dependencies
            let deps = BuiltinActionDependencies::new_standalone(
                &db_path,
                &blob_path,
                &app_id,
                vendored_plugin_dir.clone(),
                installed_plugin_dir.clone(),
                node_bin_path.clone(),
            )
            .await
            .expect("Failed to initialize dependencies");

            // Create executor and register actions
            let executor = ActionExecutor::new();
            executor.register_builtin_groups().await.expect("Failed to register groups");
            register_http_actions(&executor, &deps).await.expect("Failed to register HTTP actions");

            // Prepare context
            let context = CurrentContext {
                target: Some(ActionTarget::HttpRequest { id: request_id.clone() }),
                environment_id: cli.environment.clone(),
                workspace_id: None,
                has_window: false,
                can_prompt: false,
            };

            // Prepare params
            let params = ActionParams {
                data: serde_json::json!({
                    "render": true,
                    "follow_redirects": false,
                    "timeout_ms": 30000,
                }),
            };

            // Invoke action
            let action_id = ActionId::builtin("http", "send-request");
            let result = executor.invoke(&action_id, context, params).await.expect("Action failed");

            // Handle result
            match result {
                ActionResult::Success { data, message } => {
                    if let Some(msg) = message {
                        println!("{}", msg);
                    }
                    if let Some(data) = data {
                        println!("{}", serde_json::to_string_pretty(&data).unwrap());
                    }
                }
                ActionResult::RequiresInput { .. } => {
                    eprintln!("Action requires input (not supported in CLI)");
                }
                ActionResult::Cancelled => {
                    eprintln!("Action cancelled");
                }
            }
        }
        Commands::Get { url } => {
            if cli.verbose {
                println!("> GET {}", url);
            }

            // Build a simple GET request
            let sendable = SendableHttpRequest {
                url: url.clone(),
                method: "GET".to_string(),
                headers: vec![],
                body: None,
                options: SendableHttpRequestOptions::default(),
            };

            // Create event channel for progress
            let (event_tx, mut event_rx) = mpsc::channel(100);

            // Spawn task to print events if verbose
            let verbose = cli.verbose;
            let verbose_handle = if verbose {
                Some(tokio::spawn(async move {
                    while let Some(event) = event_rx.recv().await {
                        println!("{}", event);
                    }
                }))
            } else {
                tokio::spawn(async move { while event_rx.recv().await.is_some() {} });
                None
            };

            // Send the request
            let sender = ReqwestSender::new().expect("Failed to create HTTP client");
            let response = sender.send(sendable, event_tx).await.expect("Failed to send request");

            if let Some(handle) = verbose_handle {
                let _ = handle.await;
            }

            // Print response
            if verbose {
                println!();
            }
            println!(
                "HTTP {} {}",
                response.status,
                response.status_reason.as_deref().unwrap_or("")
            );

            if verbose {
                for (name, value) in &response.headers {
                    println!("{}: {}", name, value);
                }
                println!();
            }

            // Print body
            let (body, _stats) = response.text().await.expect("Failed to read response body");
            println!("{}", body);
        }
        Commands::Create { workspace_id, name, method, url } => {
            let request = HttpRequest {
                workspace_id,
                name,
                method: method.to_uppercase(),
                url,
                ..Default::default()
            };

            let created = db
                .upsert_http_request(&request, &UpdateSource::Sync)
                .expect("Failed to create request");

            println!("Created request: {}", created.id);
        }
    }

    // Terminate plugin manager gracefully
    plugin_manager.terminate().await;
}
@@ -1,3 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

export type WatchResult = { unlistenEvent: string, };
@@ -1,5 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

export type PluginUpdateInfo = { name: string, currentVersion: string, latestVersion: string, };

export type PluginUpdateNotification = { updateCount: number, plugins: Array<PluginUpdateInfo>, };
@@ -1,13 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Enable for NodeJS/V8 JIT compiler -->
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>

    <!-- Allow loading plugins signed with different Team IDs (e.g., 1Password) -->
    <key>com.apple.security.cs.disable-library-validation</key>
    <true/>
</dict>
</plist>
@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
</dict>
</plist>
@@ -1,115 +0,0 @@
use crate::PluginContextExt;
use crate::error::Result;
use std::sync::Arc;
use tauri::{AppHandle, Manager, Runtime, State, WebviewWindow, command};
use tauri_plugin_dialog::{DialogExt, MessageDialogKind};
use yaak_crypto::manager::EncryptionManager;
use yaak_models::models::HttpRequestHeader;
use yaak_models::queries::workspaces::default_headers;
use yaak_plugins::events::GetThemesResponse;
use yaak_plugins::manager::PluginManager;
use yaak_plugins::native_template_functions::{
    decrypt_secure_template_function, encrypt_secure_template_function,
};

/// Extension trait for accessing the EncryptionManager from Tauri Manager types.
pub trait EncryptionManagerExt<'a, R> {
    fn crypto(&'a self) -> State<'a, EncryptionManager>;
}

impl<'a, R: Runtime, M: Manager<R>> EncryptionManagerExt<'a, R> for M {
    fn crypto(&'a self) -> State<'a, EncryptionManager> {
        self.state::<EncryptionManager>()
    }
}

#[command]
pub(crate) async fn cmd_show_workspace_key<R: Runtime>(
    window: WebviewWindow<R>,
    workspace_id: &str,
) -> Result<()> {
    let key = window.crypto().reveal_workspace_key(workspace_id)?;
    window
        .dialog()
        .message(format!("Your workspace key is \n\n{}", key))
        .kind(MessageDialogKind::Info)
        .show(|_v| {});
    Ok(())
}

#[command]
pub(crate) async fn cmd_decrypt_template<R: Runtime>(
    window: WebviewWindow<R>,
    template: &str,
) -> Result<String> {
    let encryption_manager = window.app_handle().state::<EncryptionManager>();
    let plugin_context = window.plugin_context();
    Ok(decrypt_secure_template_function(&encryption_manager, &plugin_context, template)?)
}

#[command]
pub(crate) async fn cmd_secure_template<R: Runtime>(
    app_handle: AppHandle<R>,
    window: WebviewWindow<R>,
    template: &str,
) -> Result<String> {
    let plugin_manager = Arc::new((*app_handle.state::<PluginManager>()).clone());
    let encryption_manager = Arc::new((*app_handle.state::<EncryptionManager>()).clone());
    let plugin_context = window.plugin_context();
    Ok(encrypt_secure_template_function(
        plugin_manager,
        encryption_manager,
        &plugin_context,
        template,
    )?)
}

#[command]
pub(crate) async fn cmd_get_themes<R: Runtime>(
    window: WebviewWindow<R>,
    plugin_manager: State<'_, PluginManager>,
) -> Result<Vec<GetThemesResponse>> {
    Ok(plugin_manager.get_themes(&window.plugin_context()).await?)
}

#[command]
pub(crate) async fn cmd_enable_encryption<R: Runtime>(
    window: WebviewWindow<R>,
    workspace_id: &str,
) -> Result<()> {
    window.crypto().ensure_workspace_key(workspace_id)?;
    window.crypto().reveal_workspace_key(workspace_id)?;
    Ok(())
}

#[command]
pub(crate) async fn cmd_reveal_workspace_key<R: Runtime>(
    window: WebviewWindow<R>,
    workspace_id: &str,
) -> Result<String> {
    Ok(window.crypto().reveal_workspace_key(workspace_id)?)
}

#[command]
pub(crate) async fn cmd_set_workspace_key<R: Runtime>(
    window: WebviewWindow<R>,
    workspace_id: &str,
    key: &str,
) -> Result<()> {
    window.crypto().set_human_key(workspace_id, key)?;
    Ok(())
}

#[command]
pub(crate) async fn cmd_disable_encryption<R: Runtime>(
    window: WebviewWindow<R>,
    workspace_id: &str,
) -> Result<()> {
    window.crypto().disable_encryption(workspace_id)?;
    Ok(())
}

#[command]
pub(crate) fn cmd_default_headers() -> Vec<HttpRequestHeader> {
    default_headers()
}
@@ -1,130 +0,0 @@
//! Tauri-specific extensions for yaak-git.
//!
//! This module provides the Tauri commands for git functionality.

use crate::error::Result;
use std::path::{Path, PathBuf};
use tauri::command;
use yaak_git::{
    BranchDeleteResult, CloneResult, GitCommit, GitRemote, GitStatusSummary, PullResult,
    PushResult, git_add, git_add_credential, git_add_remote, git_checkout_branch, git_clone,
    git_commit, git_create_branch, git_delete_branch, git_delete_remote_branch, git_fetch_all,
    git_init, git_log, git_merge_branch, git_pull, git_push, git_remotes, git_rename_branch,
    git_rm_remote, git_status, git_unstage,
};

// NOTE: All of these commands are async to prevent blocking work from locking up the UI

#[command]
pub async fn cmd_git_checkout(dir: &Path, branch: &str, force: bool) -> Result<String> {
    Ok(git_checkout_branch(dir, branch, force).await?)
}

#[command]
pub async fn cmd_git_branch(dir: &Path, branch: &str, base: Option<&str>) -> Result<()> {
    Ok(git_create_branch(dir, branch, base).await?)
}

#[command]
pub async fn cmd_git_delete_branch(
    dir: &Path,
    branch: &str,
    force: Option<bool>,
) -> Result<BranchDeleteResult> {
    Ok(git_delete_branch(dir, branch, force.unwrap_or(false)).await?)
}

#[command]
pub async fn cmd_git_delete_remote_branch(dir: &Path, branch: &str) -> Result<()> {
    Ok(git_delete_remote_branch(dir, branch).await?)
}

#[command]
pub async fn cmd_git_merge_branch(dir: &Path, branch: &str) -> Result<()> {
    Ok(git_merge_branch(dir, branch).await?)
}

#[command]
pub async fn cmd_git_rename_branch(dir: &Path, old_name: &str, new_name: &str) -> Result<()> {
    Ok(git_rename_branch(dir, old_name, new_name).await?)
}

#[command]
pub async fn cmd_git_status(dir: &Path) -> Result<GitStatusSummary> {
    Ok(git_status(dir)?)
}

#[command]
pub async fn cmd_git_log(dir: &Path) -> Result<Vec<GitCommit>> {
    Ok(git_log(dir)?)
}

#[command]
pub async fn cmd_git_initialize(dir: &Path) -> Result<()> {
    Ok(git_init(dir)?)
}

#[command]
pub async fn cmd_git_clone(url: &str, dir: &Path) -> Result<CloneResult> {
    Ok(git_clone(url, dir).await?)
}

#[command]
pub async fn cmd_git_commit(dir: &Path, message: &str) -> Result<()> {
    Ok(git_commit(dir, message).await?)
}

#[command]
pub async fn cmd_git_fetch_all(dir: &Path) -> Result<()> {
    Ok(git_fetch_all(dir).await?)
}

#[command]
pub async fn cmd_git_push(dir: &Path) -> Result<PushResult> {
    Ok(git_push(dir).await?)
}

#[command]
pub async fn cmd_git_pull(dir: &Path) -> Result<PullResult> {
    Ok(git_pull(dir).await?)
}

#[command]
pub async fn cmd_git_add(dir: &Path, rela_paths: Vec<PathBuf>) -> Result<()> {
    for path in rela_paths {
        git_add(dir, &path)?;
    }
    Ok(())
}

#[command]
pub async fn cmd_git_unstage(dir: &Path, rela_paths: Vec<PathBuf>) -> Result<()> {
    for path in rela_paths {
        git_unstage(dir, &path)?;
    }
    Ok(())
}

#[command]
pub async fn cmd_git_add_credential(
    remote_url: &str,
    username: &str,
    password: &str,
) -> Result<()> {
    Ok(git_add_credential(remote_url, username, password).await?)
}

#[command]
pub async fn cmd_git_remotes(dir: &Path) -> Result<Vec<GitRemote>> {
    Ok(git_remotes(dir)?)
}

#[command]
pub async fn cmd_git_add_remote(dir: &Path, name: &str, url: &str) -> Result<GitRemote> {
    Ok(git_add_remote(dir, name, url)?)
}

#[command]
pub async fn cmd_git_rm_remote(dir: &Path, name: &str) -> Result<()> {
    Ok(git_rm_remote(dir, name)?)
}
@@ -1,709 +0,0 @@
use crate::PluginContextExt;
use crate::error::Error::GenericError;
use crate::error::Result;
use crate::models_ext::BlobManagerExt;
use crate::models_ext::QueryManagerExt;
use crate::render::render_http_request;
use log::{debug, warn};
use std::pin::Pin;
use std::sync::Arc;
use std::sync::atomic::{AtomicI32, Ordering};
use std::time::{Duration, Instant};
use tauri::{AppHandle, Manager, Runtime, WebviewWindow};
use tokio::fs::{File, create_dir_all};
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWriteExt};
use tokio::sync::watch::Receiver;
use tokio_util::bytes::Bytes;
use yaak_crypto::manager::EncryptionManager;
use yaak_http::client::{
    HttpConnectionOptions, HttpConnectionProxySetting, HttpConnectionProxySettingAuth,
};
use yaak_http::cookies::CookieStore;
use yaak_http::manager::{CachedClient, HttpConnectionManager};
use yaak_http::sender::ReqwestSender;
use yaak_http::tee_reader::TeeReader;
use yaak_http::transaction::HttpTransaction;
use yaak_http::types::{
    SendableBody, SendableHttpRequest, SendableHttpRequestOptions, append_query_params,
};
use yaak_models::blob_manager::BodyChunk;
use yaak_models::models::{
    CookieJar, Environment, HttpRequest, HttpResponse, HttpResponseEvent, HttpResponseHeader,
    HttpResponseState, ProxySetting, ProxySettingAuth,
};
use yaak_models::util::UpdateSource;
use yaak_plugins::events::{
    CallHttpAuthenticationRequest, HttpHeader, PluginContext, RenderPurpose,
};
use yaak_plugins::manager::PluginManager;
use yaak_plugins::template_callback::PluginTemplateCallback;
use yaak_templates::RenderOptions;
use yaak_tls::find_client_certificate;

/// Chunk size for storing request bodies (1MB)
const REQUEST_BODY_CHUNK_SIZE: usize = 1024 * 1024;

/// Context for managing response state during HTTP transactions.
/// Handles both persisted responses (stored in DB) and ephemeral responses (in-memory only).
struct ResponseContext<R: Runtime> {
    app_handle: AppHandle<R>,
    response: HttpResponse,
    update_source: UpdateSource,
}

impl<R: Runtime> ResponseContext<R> {
    fn new(app_handle: AppHandle<R>, response: HttpResponse, update_source: UpdateSource) -> Self {
        Self { app_handle, response, update_source }
    }

    /// Whether this response is persisted (has a non-empty ID)
    fn is_persisted(&self) -> bool {
        !self.response.id.is_empty()
    }

    /// Update the response state. For persisted responses, fetches from DB, applies the
    /// closure, and updates the DB. For ephemeral responses, just applies the closure
    /// to the in-memory response.
    fn update<F>(&mut self, func: F) -> Result<()>
    where
        F: FnOnce(&mut HttpResponse),
    {
        if self.is_persisted() {
            let r = self.app_handle.with_tx(|tx| {
                let mut r = tx.get_http_response(&self.response.id)?;
                func(&mut r);
                tx.update_http_response_if_id(&r, &self.update_source)?;
                Ok(r)
            })?;
            self.response = r;
            Ok(())
        } else {
            func(&mut self.response);
            Ok(())
        }
    }

    /// Get the current response state
    fn response(&self) -> &HttpResponse {
        &self.response
    }
}

pub async fn send_http_request<R: Runtime>(
    window: &WebviewWindow<R>,
    unrendered_request: &HttpRequest,
    og_response: &HttpResponse,
    environment: Option<Environment>,
    cookie_jar: Option<CookieJar>,
    cancelled_rx: &mut Receiver<bool>,
) -> Result<HttpResponse> {
    send_http_request_with_context(
        window,
        unrendered_request,
        og_response,
        environment,
        cookie_jar,
        cancelled_rx,
        &window.plugin_context(),
    )
    .await
}

pub async fn send_http_request_with_context<R: Runtime>(
    window: &WebviewWindow<R>,
    unrendered_request: &HttpRequest,
    og_response: &HttpResponse,
    environment: Option<Environment>,
    cookie_jar: Option<CookieJar>,
    cancelled_rx: &Receiver<bool>,
    plugin_context: &PluginContext,
) -> Result<HttpResponse> {
    let app_handle = window.app_handle().clone();
    let update_source = UpdateSource::from_window_label(window.label());
    let mut response_ctx =
        ResponseContext::new(app_handle.clone(), og_response.clone(), update_source);

    // Execute the inner send logic and handle errors consistently
    let start = Instant::now();
    let result = send_http_request_inner(
        window,
        unrendered_request,
        environment,
        cookie_jar,
        cancelled_rx,
        plugin_context,
        &mut response_ctx,
    )
    .await;

    match result {
        Ok(response) => Ok(response),
        Err(e) => {
            let error = e.to_string();
            let elapsed = start.elapsed().as_millis() as i32;
            warn!("Failed to send request: {error:?}");
            let _ = response_ctx.update(|r| {
                r.state = HttpResponseState::Closed;
                r.elapsed = elapsed;
                if r.elapsed_headers == 0 {
                    r.elapsed_headers = elapsed;
                }
                r.error = Some(error);
            });
            Ok(response_ctx.response().clone())
        }
    }
}

async fn send_http_request_inner<R: Runtime>(
    window: &WebviewWindow<R>,
    unrendered_request: &HttpRequest,
    environment: Option<Environment>,
    cookie_jar: Option<CookieJar>,
    cancelled_rx: &Receiver<bool>,
    plugin_context: &PluginContext,
    response_ctx: &mut ResponseContext<R>,
) -> Result<HttpResponse> {
    let app_handle = window.app_handle().clone();
    let plugin_manager = Arc::new((*app_handle.state::<PluginManager>()).clone());
    let encryption_manager = Arc::new((*app_handle.state::<EncryptionManager>()).clone());
    let connection_manager = app_handle.state::<HttpConnectionManager>();
    let settings = window.db().get_settings();
    let workspace_id = &unrendered_request.workspace_id;
    let folder_id = unrendered_request.folder_id.as_deref();
    let environment_id = environment.map(|e| e.id);
    let workspace = window.db().get_workspace(workspace_id)?;
    let (resolved, auth_context_id) = resolve_http_request(window, unrendered_request)?;
    let cb = PluginTemplateCallback::new(
        plugin_manager.clone(),
        encryption_manager.clone(),
        &plugin_context,
        RenderPurpose::Send,
    );
    let env_chain =
        window.db().resolve_environments(&workspace.id, folder_id, environment_id.as_deref())?;
    let request = render_http_request(&resolved, env_chain, &cb, &RenderOptions::throw()).await?;

    // Build the sendable request using the new SendableHttpRequest type
    let options = SendableHttpRequestOptions {
        follow_redirects: workspace.setting_follow_redirects,
        timeout: if workspace.setting_request_timeout > 0 {
            Some(Duration::from_millis(workspace.setting_request_timeout.unsigned_abs() as u64))
        } else {
            None
        },
    };
    let mut sendable_request = SendableHttpRequest::from_http_request(&request, options).await?;

    debug!("Sending request to {} {}", sendable_request.method, sendable_request.url);

    let proxy_setting = match settings.proxy {
        None => HttpConnectionProxySetting::System,
        Some(ProxySetting::Disabled) => HttpConnectionProxySetting::Disabled,
        Some(ProxySetting::Enabled { http, https, auth, bypass, disabled }) => {
            if disabled {
                HttpConnectionProxySetting::System
            } else {
                HttpConnectionProxySetting::Enabled {
                    http,
                    https,
                    bypass,
                    auth: match auth {
                        None => None,
                        Some(ProxySettingAuth { user, password }) => {
                            Some(HttpConnectionProxySettingAuth { user, password })
                        }
                    },
                }
            }
        }
    };

    let client_certificate =
        find_client_certificate(&sendable_request.url, &settings.client_certificates);

    // Create cookie store if a cookie jar is specified
    let maybe_cookie_store = match cookie_jar.clone() {
        Some(CookieJar { id, .. }) => {
            // NOTE: We need to refetch the cookie jar because a chained request might have
            // updated cookies when we rendered the request.
            let cj = window.db().get_cookie_jar(&id)?;
            let cookie_store = CookieStore::from_cookies(cj.cookies.clone());
            Some((cookie_store, cj))
        }
        None => None,
    };

    let cached_client = connection_manager
        .get_client(&HttpConnectionOptions {
            id: plugin_context.id.clone(),
            validate_certificates: workspace.setting_validate_certificates,
            proxy: proxy_setting,
            client_certificate,
            dns_overrides: workspace.setting_dns_overrides.clone(),
        })
        .await?;

    // Apply authentication to the request
    apply_authentication(
        &window,
        &mut sendable_request,
        &request,
        auth_context_id,
        &plugin_manager,
        plugin_context,
    )
    .await?;

    let cookie_store = maybe_cookie_store.as_ref().map(|(cs, _)| cs.clone());
    let result = execute_transaction(
        cached_client,
        sendable_request,
        response_ctx,
        cancelled_rx.clone(),
        cookie_store,
    )
    .await;

    // Wait for blob writing to complete and check for errors
    let final_result = match result {
        Ok((response, maybe_blob_write_handle)) => {
            // Check if blob writing failed
            if let Some(handle) = maybe_blob_write_handle {
                if let Ok(Err(e)) = handle.await {
                    // Update response with the storage error
                    let _ = response_ctx.update(|r| {
                        let error_msg =
                            format!("Request succeeded but failed to store request body: {}", e);
                        r.error = Some(match &r.error {
                            Some(existing) => format!("{}; {}", existing, error_msg),
                            None => error_msg,
                        });
                    });
                }
            }
            Ok(response)
        }
        Err(e) => Err(e),
    };

    // Persist cookies back to the database after the request completes
    if let Some((cookie_store, mut cj)) = maybe_cookie_store {
        let cookies = cookie_store.get_all_cookies();
        cj.cookies = cookies;
        if let Err(e) = window.db().upsert_cookie_jar(&cj, &UpdateSource::Background) {
            warn!("Failed to persist cookies to database: {}", e);
        }
    }

    final_result
}

pub fn resolve_http_request<R: Runtime>(
    window: &WebviewWindow<R>,
    request: &HttpRequest,
) -> Result<(HttpRequest, String)> {
    let mut new_request = request.clone();

    let (authentication_type, authentication, authentication_context_id) =
        window.db().resolve_auth_for_http_request(request)?;
    new_request.authentication_type = authentication_type;
    new_request.authentication = authentication;

    let headers = window.db().resolve_headers_for_http_request(request)?;
    new_request.headers = headers;

    Ok((new_request, authentication_context_id))
}

async fn execute_transaction<R: Runtime>(
    cached_client: CachedClient,
    mut sendable_request: SendableHttpRequest,
    response_ctx: &mut ResponseContext<R>,
    mut cancelled_rx: Receiver<bool>,
    cookie_store: Option<CookieStore>,
) -> Result<(HttpResponse, Option<tauri::async_runtime::JoinHandle<Result<()>>>)> {
    let app_handle = &response_ctx.app_handle.clone();
    let response_id = response_ctx.response().id.clone();
    let workspace_id = response_ctx.response().workspace_id.clone();
    let is_persisted = response_ctx.is_persisted();

    // Keep a reference to the resolver for DNS timing events
    let resolver = cached_client.resolver.clone();

    let sender = ReqwestSender::with_client(cached_client.client);
    let transaction = match cookie_store {
        Some(cs) => HttpTransaction::with_cookie_store(sender, cs),
        None => HttpTransaction::new(sender),
    };
    let start = Instant::now();

    // Capture request headers before sending
    let request_headers: Vec<HttpResponseHeader> = sendable_request
        .headers
        .iter()
        .map(|(name, value)| HttpResponseHeader { name: name.clone(), value: value.clone() })
        .collect();

    // Update response with headers info
    response_ctx.update(|r| {
        r.url = sendable_request.url.clone();
        r.request_headers = request_headers;
    })?;

    // Create bounded channel for receiving events and spawn a task to store them in DB
    // Buffer size of 100 events provides back pressure if DB writes are slow
    let (event_tx, mut event_rx) =
        tokio::sync::mpsc::channel::<yaak_http::sender::HttpResponseEvent>(100);

    // Set the event sender on the DNS resolver so it can emit DNS timing events
    resolver.set_event_sender(Some(event_tx.clone())).await;
|
||||
|
||||
// Shared state to capture DNS timing from the event processing task
|
||||
let dns_elapsed = Arc::new(AtomicI32::new(0));
|
||||
|
||||
// Write events to DB in a task (only for persisted responses)
|
||||
if is_persisted {
|
||||
let response_id = response_id.clone();
|
||||
let app_handle = app_handle.clone();
|
||||
let update_source = response_ctx.update_source.clone();
|
||||
let workspace_id = workspace_id.clone();
|
||||
let dns_elapsed = dns_elapsed.clone();
|
||||
tokio::spawn(async move {
|
||||
while let Some(event) = event_rx.recv().await {
|
||||
// Capture DNS timing when we see a DNS event
|
||||
if let yaak_http::sender::HttpResponseEvent::DnsResolved { duration, .. } = &event {
|
||||
dns_elapsed.store(*duration as i32, Ordering::SeqCst);
|
||||
}
|
||||
let db_event = HttpResponseEvent::new(&response_id, &workspace_id, event.into());
|
||||
let _ = app_handle.db().upsert_http_response_event(&db_event, &update_source);
|
||||
}
|
||||
});
|
||||
} else {
|
||||
// For ephemeral responses, just drain the events but still capture DNS timing
|
||||
let dns_elapsed = dns_elapsed.clone();
|
||||
tokio::spawn(async move {
|
||||
while let Some(event) = event_rx.recv().await {
|
||||
if let yaak_http::sender::HttpResponseEvent::DnsResolved { duration, .. } = &event {
|
||||
dns_elapsed.store(*duration as i32, Ordering::SeqCst);
|
||||
}
|
||||
}
|
||||
});
|
||||
};
|
||||
|
||||
// Capture request body as it's sent (only for persisted responses)
|
||||
let body_id = format!("{}.request", response_id);
|
||||
let maybe_blob_write_handle = match sendable_request.body {
|
||||
Some(SendableBody::Bytes(bytes)) => {
|
||||
if is_persisted {
|
||||
write_bytes_to_db_sync(response_ctx, &body_id, bytes.clone())?;
|
||||
}
|
||||
sendable_request.body = Some(SendableBody::Bytes(bytes));
|
||||
None
|
||||
}
|
||||
Some(SendableBody::Stream(stream)) => {
|
||||
// Wrap stream with TeeReader to capture data as it's read
|
||||
// Use unbounded channel to ensure all data is captured without blocking the HTTP request
|
||||
let (body_chunk_tx, body_chunk_rx) = tokio::sync::mpsc::unbounded_channel::<Vec<u8>>();
|
||||
let tee_reader = TeeReader::new(stream, body_chunk_tx);
|
||||
let pinned: Pin<Box<dyn AsyncRead + Send + 'static>> = Box::pin(tee_reader);
|
||||
|
||||
let handle = if is_persisted {
|
||||
// Spawn task to write request body chunks to blob DB
|
||||
let app_handle = app_handle.clone();
|
||||
let response_id = response_id.clone();
|
||||
let workspace_id = workspace_id.clone();
|
||||
let body_id = body_id.clone();
|
||||
let update_source = response_ctx.update_source.clone();
|
||||
Some(tauri::async_runtime::spawn(async move {
|
||||
write_stream_chunks_to_db(
|
||||
app_handle,
|
||||
&body_id,
|
||||
&workspace_id,
|
||||
&response_id,
|
||||
&update_source,
|
||||
body_chunk_rx,
|
||||
)
|
||||
.await
|
||||
}))
|
||||
} else {
|
||||
// For ephemeral responses, just drain the body chunks
|
||||
tauri::async_runtime::spawn(async move {
|
||||
let mut rx = body_chunk_rx;
|
||||
while rx.recv().await.is_some() {}
|
||||
});
|
||||
None
|
||||
};
|
||||
|
||||
sendable_request.body = Some(SendableBody::Stream(pinned));
|
||||
handle
|
||||
}
|
||||
None => {
|
||||
sendable_request.body = None;
|
||||
None
|
||||
}
|
||||
};
|
||||
|
||||
// Execute the transaction with cancellation support
|
||||
// This returns the response with headers, but body is not yet consumed
|
||||
// Events (headers, settings, chunks) are sent through the channel
|
||||
let mut http_response = transaction
|
||||
.execute_with_cancellation(sendable_request, cancelled_rx.clone(), event_tx)
|
||||
.await?;
|
||||
|
||||
// Prepare the response path before consuming the body
|
||||
let body_path = if response_id.is_empty() {
|
||||
// Ephemeral responses: use OS temp directory for automatic cleanup
|
||||
let temp_dir = std::env::temp_dir().join("yaak-ephemeral-responses");
|
||||
create_dir_all(&temp_dir).await?;
|
||||
temp_dir.join(uuid::Uuid::new_v4().to_string())
|
||||
} else {
|
||||
// Persisted responses: use app data directory
|
||||
let dir = app_handle.path().app_data_dir()?;
|
||||
let base_dir = dir.join("responses");
|
||||
create_dir_all(&base_dir).await?;
|
||||
base_dir.join(&response_id)
|
||||
};
|
||||
|
||||
// Extract metadata before consuming the body (headers are available immediately)
|
||||
// Url might change, so update again
|
||||
response_ctx.update(|r| {
|
||||
r.body_path = Some(body_path.to_string_lossy().to_string());
|
||||
r.elapsed_headers = start.elapsed().as_millis() as i32;
|
||||
r.status = http_response.status as i32;
|
||||
r.status_reason = http_response.status_reason.clone();
|
||||
r.url = http_response.url.clone();
|
||||
r.remote_addr = http_response.remote_addr.clone();
|
||||
r.version = http_response.version.clone();
|
||||
r.headers = http_response
|
||||
.headers
|
||||
.iter()
|
||||
.map(|(name, value)| HttpResponseHeader { name: name.clone(), value: value.clone() })
|
||||
.collect();
|
||||
r.content_length = http_response.content_length.map(|l| l as i32);
|
||||
r.state = HttpResponseState::Connected;
|
||||
r.request_headers = http_response
|
||||
.request_headers
|
||||
.iter()
|
||||
.map(|(n, v)| HttpResponseHeader { name: n.clone(), value: v.clone() })
|
||||
.collect();
|
||||
})?;
|
||||
|
||||
// Get the body stream for manual consumption
|
||||
let mut body_stream = http_response.into_body_stream()?;
|
||||
|
||||
// Open file for writing
|
||||
let mut file = File::options()
|
||||
.create(true)
|
||||
.truncate(true)
|
||||
.write(true)
|
||||
.open(&body_path)
|
||||
.await
|
||||
.map_err(|e| GenericError(format!("Failed to open file: {}", e)))?;
|
||||
|
||||
// Stream body to file, with throttled DB updates to avoid excessive writes
|
||||
let mut written_bytes: usize = 0;
|
||||
let mut last_update_time = start;
|
||||
let mut buf = [0u8; 8192];
|
||||
|
||||
// Throttle settings: update DB at most every 100ms
|
||||
const UPDATE_INTERVAL_MS: u128 = 100;
|
||||
|
||||
loop {
|
||||
// Check for cancellation. If we already have headers/body, just close cleanly without error
|
||||
if *cancelled_rx.borrow() {
|
||||
break;
|
||||
}
|
||||
|
||||
// Use select! to race between reading and cancellation, so cancellation is immediate
|
||||
let read_result = tokio::select! {
|
||||
biased;
|
||||
_ = cancelled_rx.changed() => {
|
||||
break;
|
||||
}
|
||||
result = body_stream.read(&mut buf) => result,
|
||||
};
|
||||
|
||||
match read_result {
|
||||
Ok(0) => break, // EOF
|
||||
Ok(n) => {
|
||||
file.write_all(&buf[..n])
|
||||
.await
|
||||
.map_err(|e| GenericError(format!("Failed to write to file: {}", e)))?;
|
||||
file.flush()
|
||||
.await
|
||||
.map_err(|e| GenericError(format!("Failed to flush file: {}", e)))?;
|
||||
written_bytes += n;
|
||||
|
||||
// Throttle DB updates: only update if enough time has passed
|
||||
let now = Instant::now();
|
||||
let elapsed_since_update = now.duration_since(last_update_time).as_millis();
|
||||
|
||||
if elapsed_since_update >= UPDATE_INTERVAL_MS {
|
||||
response_ctx.update(|r| {
|
||||
r.elapsed = start.elapsed().as_millis() as i32;
|
||||
r.content_length = Some(written_bytes as i32);
|
||||
})?;
|
||||
last_update_time = now;
|
||||
}
|
||||
}
|
||||
Err(e) => {
|
||||
return Err(GenericError(format!("Failed to read response body: {}", e)));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Final update with closed state and accurate byte count
|
||||
response_ctx.update(|r| {
|
||||
r.elapsed = start.elapsed().as_millis() as i32;
|
||||
r.elapsed_dns = dns_elapsed.load(Ordering::SeqCst);
|
||||
r.content_length = Some(written_bytes as i32);
|
||||
r.state = HttpResponseState::Closed;
|
||||
})?;
|
||||
|
||||
// Clear the event sender from the resolver since this request is done
|
||||
resolver.set_event_sender(None).await;
|
||||
|
||||
Ok((response_ctx.response().clone(), maybe_blob_write_handle))
|
||||
}
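
// For reference: `TeeReader` (used above) is defined elsewhere in the
// codebase. The following is a minimal sketch of the idea — an assumption for
// illustration, not the actual implementation — an `AsyncRead` wrapper that
// copies every chunk it reads into an unbounded channel so the request body
// can be persisted while it streams to the server.
#[allow(dead_code)]
struct TeeReaderSketch<R> {
    inner: R,
    tx: tokio::sync::mpsc::UnboundedSender<Vec<u8>>,
}

impl<R: AsyncRead + Unpin> AsyncRead for TeeReaderSketch<R> {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut std::task::Context<'_>,
        buf: &mut tokio::io::ReadBuf<'_>,
    ) -> std::task::Poll<std::io::Result<()>> {
        let this = self.get_mut();
        let before = buf.filled().len();
        match Pin::new(&mut this.inner).poll_read(cx, buf) {
            std::task::Poll::Ready(Ok(())) => {
                // Forward whatever was just read; if the receiver is gone,
                // keep streaming rather than failing the request.
                let new = &buf.filled()[before..];
                if !new.is_empty() {
                    let _ = this.tx.send(new.to_vec());
                }
                std::task::Poll::Ready(Ok(()))
            }
            other => other,
        }
    }
}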

fn write_bytes_to_db_sync<R: Runtime>(
    response_ctx: &mut ResponseContext<R>,
    body_id: &str,
    data: Bytes,
) -> Result<()> {
    if data.is_empty() {
        return Ok(());
    }

    // Write in chunks if data is large
    let mut offset = 0;
    let mut chunk_index = 0;
    while offset < data.len() {
        let end = std::cmp::min(offset + REQUEST_BODY_CHUNK_SIZE, data.len());
        let chunk_data = data.slice(offset..end).to_vec();
        let chunk = BodyChunk::new(body_id, chunk_index, chunk_data);
        response_ctx.app_handle.blobs().insert_chunk(&chunk)?;
        offset = end;
        chunk_index += 1;
    }

    // Update the response with the total request body size
    response_ctx.update(|r| {
        r.request_content_length = Some(data.len() as i32);
    })?;

    Ok(())
}

async fn write_stream_chunks_to_db<R: Runtime>(
    app_handle: AppHandle<R>,
    body_id: &str,
    workspace_id: &str,
    response_id: &str,
    update_source: &UpdateSource,
    mut rx: tokio::sync::mpsc::UnboundedReceiver<Vec<u8>>,
) -> Result<()> {
    let mut buffer = Vec::with_capacity(REQUEST_BODY_CHUNK_SIZE);
    let mut chunk_index = 0;
    let mut total_bytes: usize = 0;

    while let Some(data) = rx.recv().await {
        total_bytes += data.len();
        buffer.extend_from_slice(&data);

        // Flush when buffer reaches chunk size
        while buffer.len() >= REQUEST_BODY_CHUNK_SIZE {
            debug!("Writing chunk {chunk_index} to DB");
            let chunk_data: Vec<u8> = buffer.drain(..REQUEST_BODY_CHUNK_SIZE).collect();
            let chunk = BodyChunk::new(body_id, chunk_index, chunk_data);
            app_handle.blobs().insert_chunk(&chunk)?;
            app_handle.db().upsert_http_response_event(
                &HttpResponseEvent::new(
                    response_id,
                    workspace_id,
                    yaak_http::sender::HttpResponseEvent::ChunkSent {
                        bytes: REQUEST_BODY_CHUNK_SIZE,
                    }
                    .into(),
                ),
                update_source,
            )?;
            chunk_index += 1;
        }
    }

    // Flush remaining data
    if !buffer.is_empty() {
        let chunk = BodyChunk::new(body_id, chunk_index, buffer);
        debug!("Flushing remaining data {chunk_index} {}", chunk.data.len());
        app_handle.blobs().insert_chunk(&chunk)?;
        app_handle.db().upsert_http_response_event(
            &HttpResponseEvent::new(
                response_id,
                workspace_id,
                yaak_http::sender::HttpResponseEvent::ChunkSent { bytes: chunk.data.len() }.into(),
            ),
            update_source,
        )?;
    }

    // Update the response with the total request body size
    app_handle.with_tx(|tx| {
        debug!("Updating final body length {total_bytes}");
        if let Ok(mut response) = tx.get_http_response(&response_id) {
            response.request_content_length = Some(total_bytes as i32);
            tx.update_http_response_if_id(&response, update_source)?;
        }
        Ok(())
    })?;

    Ok(())
}

async fn apply_authentication<R: Runtime>(
    _window: &WebviewWindow<R>,
    sendable_request: &mut SendableHttpRequest,
    request: &HttpRequest,
    auth_context_id: String,
    plugin_manager: &PluginManager,
    plugin_context: &PluginContext,
) -> Result<()> {
    match &request.authentication_type {
        None => {
            // No authentication found. Not even inherited
        }
        Some(authentication_type) if authentication_type == "none" => {
            // Explicitly no authentication
        }
        Some(authentication_type) => {
            let req = CallHttpAuthenticationRequest {
                context_id: format!("{:x}", md5::compute(auth_context_id)),
                values: serde_json::from_value(serde_json::to_value(&request.authentication)?)?,
                url: sendable_request.url.clone(),
                method: sendable_request.method.clone(),
                headers: sendable_request
                    .headers
                    .iter()
                    .map(|(name, value)| HttpHeader {
                        name: name.to_string(),
                        value: value.to_string(),
                    })
                    .collect(),
            };
            let plugin_result = plugin_manager
                .call_http_authentication(plugin_context, &authentication_type, req)
                .await?;

            for header in plugin_result.set_headers.unwrap_or_default() {
                sendable_request.insert_header((header.name, header.value));
            }

            if let Some(params) = plugin_result.set_query_parameters {
                let params = params.into_iter().map(|p| (p.name, p.value)).collect::<Vec<_>>();
                sendable_request.url = append_query_params(&sendable_request.url, params);
            }
        }
    }
    Ok(())
}
@@ -1,359 +0,0 @@
//! Tauri-specific plugin management code.
//!
//! This module contains all Tauri integration for the plugin system:
//! - Plugin initialization and lifecycle management
//! - Tauri commands for plugin search/install/uninstall
//! - Plugin update checking

use crate::PluginContextExt;
use crate::error::Result;
use crate::models_ext::QueryManagerExt;
use log::{error, info, warn};
use serde::Serialize;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::{Duration, Instant};
use tauri::path::BaseDirectory;
use tauri::plugin::{Builder, TauriPlugin};
use tauri::{
    AppHandle, Emitter, Manager, RunEvent, Runtime, State, WebviewWindow, WindowEvent, command,
    is_dev,
};
use tokio::sync::Mutex;
use ts_rs::TS;
use yaak_models::models::Plugin;
use yaak_models::util::UpdateSource;
use yaak_plugins::api::{
    PluginNameVersion, PluginSearchResponse, PluginUpdatesResponse, check_plugin_updates,
    search_plugins,
};
use yaak_plugins::events::{Color, Icon, PluginContext, ShowToastRequest};
use yaak_plugins::install::{delete_and_uninstall, download_and_install};
use yaak_plugins::manager::PluginManager;
use yaak_plugins::plugin_meta::get_plugin_meta;
use yaak_tauri_utils::api_client::yaak_api_client;

static EXITING: AtomicBool = AtomicBool::new(false);

// ============================================================================
// Plugin Updater
// ============================================================================

const MAX_UPDATE_CHECK_HOURS: u64 = 12;

pub struct PluginUpdater {
    last_check: Option<Instant>,
}

#[derive(Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export, export_to = "index.ts")]
pub struct PluginUpdateNotification {
    pub update_count: usize,
    pub plugins: Vec<PluginUpdateInfo>,
}

#[derive(Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export, export_to = "index.ts")]
pub struct PluginUpdateInfo {
    pub name: String,
    pub current_version: String,
    pub latest_version: String,
}

impl PluginUpdater {
    pub fn new() -> Self {
        Self { last_check: None }
    }

    pub async fn check_now<R: Runtime>(&mut self, window: &WebviewWindow<R>) -> Result<bool> {
        self.last_check = Some(Instant::now());

        info!("Checking for plugin updates");

        let http_client = yaak_api_client(window.app_handle())?;
        let plugins = window.app_handle().db().list_plugins()?;
        let updates = check_plugin_updates(&http_client, plugins.clone()).await?;

        if updates.plugins.is_empty() {
            info!("No plugin updates available");
            return Ok(false);
        }

        // Get current plugin versions to build notification
        let mut update_infos = Vec::new();

        for update in &updates.plugins {
            if let Some(plugin) = plugins.iter().find(|p| {
                if let Ok(meta) = get_plugin_meta(&std::path::Path::new(&p.directory)) {
                    meta.name == update.name
                } else {
                    false
                }
            }) {
                if let Ok(meta) = get_plugin_meta(&std::path::Path::new(&plugin.directory)) {
                    update_infos.push(PluginUpdateInfo {
                        name: update.name.clone(),
                        current_version: meta.version,
                        latest_version: update.version.clone(),
                    });
                }
            }
        }

        let notification =
            PluginUpdateNotification { update_count: update_infos.len(), plugins: update_infos };

        info!("Found {} plugin update(s)", notification.update_count);

        if let Err(e) = window.emit_to(window.label(), "plugin_updates_available", &notification) {
            error!("Failed to emit plugin_updates_available event: {}", e);
        }

        Ok(true)
    }

    pub async fn maybe_check<R: Runtime>(&mut self, window: &WebviewWindow<R>) -> Result<bool> {
        let update_period_seconds = MAX_UPDATE_CHECK_HOURS * 60 * 60;

        if let Some(i) = self.last_check
            && i.elapsed().as_secs() < update_period_seconds
        {
            return Ok(false);
        }

        self.check_now(window).await
    }
}

// ============================================================================
// Tauri Commands
// ============================================================================

#[command]
pub async fn cmd_plugins_search<R: Runtime>(
    app_handle: AppHandle<R>,
    query: &str,
) -> Result<PluginSearchResponse> {
    let http_client = yaak_api_client(&app_handle)?;
    Ok(search_plugins(&http_client, query).await?)
}

#[command]
pub async fn cmd_plugins_install<R: Runtime>(
    window: WebviewWindow<R>,
    name: &str,
    version: Option<String>,
) -> Result<()> {
    let plugin_manager = Arc::new((*window.state::<PluginManager>()).clone());
    let http_client = yaak_api_client(window.app_handle())?;
    let query_manager = window.state::<yaak_models::query_manager::QueryManager>();
    let plugin_context = window.plugin_context();
    download_and_install(
        plugin_manager,
        &query_manager,
        &http_client,
        &plugin_context,
        name,
        version,
    )
    .await?;
    Ok(())
}

#[command]
pub async fn cmd_plugins_uninstall<R: Runtime>(
    plugin_id: &str,
    window: WebviewWindow<R>,
) -> Result<Plugin> {
    let plugin_manager = Arc::new((*window.state::<PluginManager>()).clone());
    let query_manager = window.state::<yaak_models::query_manager::QueryManager>();
    let plugin_context = window.plugin_context();
    Ok(delete_and_uninstall(plugin_manager, &query_manager, &plugin_context, plugin_id).await?)
}

#[command]
pub async fn cmd_plugins_updates<R: Runtime>(
    app_handle: AppHandle<R>,
) -> Result<PluginUpdatesResponse> {
    let http_client = yaak_api_client(&app_handle)?;
    let plugins = app_handle.db().list_plugins()?;
    Ok(check_plugin_updates(&http_client, plugins).await?)
}

#[command]
pub async fn cmd_plugins_update_all<R: Runtime>(
    window: WebviewWindow<R>,
) -> Result<Vec<PluginNameVersion>> {
    let http_client = yaak_api_client(window.app_handle())?;
    let plugins = window.db().list_plugins()?;

    // Get list of available updates (already filtered to only registry plugins)
    let updates = check_plugin_updates(&http_client, plugins).await?;

    if updates.plugins.is_empty() {
        return Ok(Vec::new());
    }

    let plugin_manager = Arc::new((*window.state::<PluginManager>()).clone());
    let query_manager = window.state::<yaak_models::query_manager::QueryManager>();
    let plugin_context = window.plugin_context();

    let mut updated = Vec::new();

    for update in updates.plugins {
        info!("Updating plugin: {} to version {}", update.name, update.version);
        match download_and_install(
            plugin_manager.clone(),
            &query_manager,
            &http_client,
            &plugin_context,
            &update.name,
            Some(update.version.clone()),
        )
        .await
        {
            Ok(_) => {
                info!("Successfully updated plugin: {}", update.name);
                updated.push(update.clone());
            }
            Err(e) => {
                log::error!("Failed to update plugin {}: {:?}", update.name, e);
            }
        }
    }

    Ok(updated)
}

// ============================================================================
// Tauri Plugin Initialization
// ============================================================================

pub fn init<R: Runtime>() -> TauriPlugin<R> {
    Builder::new("yaak-plugins")
        .setup(|app_handle, _| {
            // Resolve paths for plugin manager
            let vendored_plugin_dir = app_handle
                .path()
                .resolve("vendored/plugins", BaseDirectory::Resource)
                .expect("failed to resolve plugin directory resource");

            let installed_plugin_dir = app_handle
                .path()
                .app_data_dir()
                .expect("failed to get app data dir")
                .join("installed-plugins");

            #[cfg(target_os = "windows")]
            let node_bin_name = "yaaknode.exe";
            #[cfg(not(target_os = "windows"))]
            let node_bin_name = "yaaknode";

            let node_bin_path = app_handle
                .path()
                .resolve(format!("vendored/node/{}", node_bin_name), BaseDirectory::Resource)
                .expect("failed to resolve yaaknode binary");

            let plugin_runtime_main = app_handle
                .path()
                .resolve("vendored/plugin-runtime", BaseDirectory::Resource)
                .expect("failed to resolve plugin runtime")
                .join("index.cjs");

            let dev_mode = is_dev();

            // Create plugin manager asynchronously
            let app_handle_clone = app_handle.clone();
            tauri::async_runtime::block_on(async move {
                let manager = PluginManager::new(
                    vendored_plugin_dir,
                    installed_plugin_dir,
                    node_bin_path,
                    plugin_runtime_main,
                    dev_mode,
                )
                .await;

                // Initialize all plugins after manager is created
                let bundled_dirs = manager
                    .list_bundled_plugin_dirs()
                    .await
                    .expect("Failed to list bundled plugins");

                // Ensure all bundled plugins make it into the database
                let db = app_handle_clone.db();
                for dir in &bundled_dirs {
                    if db.get_plugin_by_directory(dir).is_none() {
                        db.upsert_plugin(
                            &Plugin {
                                directory: dir.clone(),
                                enabled: true,
                                url: None,
                                ..Default::default()
                            },
                            &UpdateSource::Background,
                        )
                        .expect("Failed to upsert bundled plugin");
                    }
                }

                // Get all plugins from database and initialize
                let plugins = db.list_plugins().expect("Failed to list plugins from database");
                drop(db); // Explicitly drop the connection before await

                let errors =
                    manager.initialize_all_plugins(plugins, &PluginContext::new_empty()).await;

                // Show toast for any failed plugins
                for (plugin_dir, error_msg) in errors {
                    let plugin_name = plugin_dir.split('/').last().unwrap_or(&plugin_dir);
                    let toast = ShowToastRequest {
                        message: format!("Failed to start plugin '{}': {}", plugin_name, error_msg),
                        color: Some(Color::Danger),
                        icon: Some(Icon::AlertTriangle),
                        timeout: Some(10000),
                    };
                    if let Err(emit_err) = app_handle_clone.emit("show_toast", toast) {
                        error!("Failed to emit toast for plugin error: {emit_err:?}");
                    }
                }

                app_handle_clone.manage(manager);
            });

            let plugin_updater = PluginUpdater::new();
            app_handle.manage(Mutex::new(plugin_updater));

            Ok(())
        })
        .on_event(|app, e| match e {
            RunEvent::ExitRequested { api, .. } => {
                if EXITING.swap(true, Ordering::SeqCst) {
                    return; // Only exit once to prevent infinite recursion
                }
                api.prevent_exit();
                tauri::async_runtime::block_on(async move {
                    info!("Exiting plugin runtime due to app exit");
                    let manager: State<PluginManager> = app.state();
                    manager.terminate().await;
                    app.exit(0);
                });
            }
            RunEvent::WindowEvent { event: WindowEvent::Focused(true), label, .. } => {
                // Check for plugin updates on window focus
                let w = app.get_webview_window(&label).unwrap();
                let h = app.clone();
                tauri::async_runtime::spawn(async move {
                    tokio::time::sleep(Duration::from_secs(3)).await;
                    let val: State<'_, Mutex<PluginUpdater>> = h.state();
                    if let Err(e) = val.lock().await.maybe_check(&w).await {
                        warn!("Failed to check for plugin updates {e:?}");
                    }
                });
            }
            _ => {}
        })
        .build()
}
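
// Usage sketch (assumed wiring, following standard Tauri conventions rather
// than code specific to this repo): the plugin built by `init()` would be
// attached from the app's builder, e.g.
//
//     tauri::Builder::default()
//         .plugin(init())
//         .run(tauri::generate_context!())
//         .expect("error while running the app");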
@@ -1,18 +0,0 @@
[package]
name = "yaak-actions-builtin"
version = "0.1.0"
edition = "2024"
authors = ["Gregory Schier"]
publish = false

[dependencies]
yaak-actions = { workspace = true }
yaak-http = { workspace = true }
yaak-models = { workspace = true }
yaak-templates = { workspace = true }
yaak-plugins = { workspace = true }
yaak-crypto = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["sync", "rt-multi-thread"] }
log = { workspace = true }
@@ -1,88 +0,0 @@
//! Dependency injection for built-in actions.

use std::path::{Path, PathBuf};
use std::sync::Arc;
use yaak_crypto::manager::EncryptionManager;
use yaak_models::query_manager::QueryManager;
use yaak_plugins::events::PluginContext;
use yaak_plugins::manager::PluginManager;

/// Dependencies needed by built-in action implementations.
///
/// This struct bundles all the dependencies that action handlers need,
/// providing a clean way to initialize them in different contexts
/// (CLI, Tauri app, MCP server, etc.).
pub struct BuiltinActionDependencies {
    pub query_manager: Arc<QueryManager>,
    pub plugin_manager: Arc<PluginManager>,
    pub encryption_manager: Arc<EncryptionManager>,
}

impl BuiltinActionDependencies {
    /// Create dependencies for standalone usage (CLI, MCP server, etc.)
    ///
    /// This initializes all the necessary managers following the same pattern
    /// as the yaak-cli implementation.
    pub async fn new_standalone(
        db_path: &Path,
        blob_path: &Path,
        app_id: &str,
        plugin_vendored_dir: PathBuf,
        plugin_installed_dir: PathBuf,
        node_path: PathBuf,
    ) -> Result<Self, Box<dyn std::error::Error>> {
        // Initialize database
        let (query_manager, _, _) = yaak_models::init_standalone(db_path, blob_path)?;

        // Initialize encryption manager (takes QueryManager by value)
        let encryption_manager = Arc::new(EncryptionManager::new(
            query_manager.clone(),
            app_id.to_string(),
        ));

        let query_manager = Arc::new(query_manager);

        // Find plugin runtime
        let plugin_runtime_main = std::env::var("YAAK_PLUGIN_RUNTIME")
            .map(PathBuf::from)
            .unwrap_or_else(|_| {
                // Development fallback
                PathBuf::from(env!("CARGO_MANIFEST_DIR"))
                    .join("../../crates-tauri/yaak-app/vendored/plugin-runtime/index.cjs")
            });

        // Initialize plugin manager
        let plugin_manager = Arc::new(
            PluginManager::new(
                plugin_vendored_dir,
                plugin_installed_dir,
                node_path,
                plugin_runtime_main,
                false, // dev_mode: the CLI does not run plugins in dev mode
            )
            .await,
        );

        // Initialize plugins from database
        let db = query_manager.connect();
        let plugins = db.list_plugins().unwrap_or_default();
        if !plugins.is_empty() {
            let errors = plugin_manager
                .initialize_all_plugins(plugins, &PluginContext::new_empty())
                .await;
            for (plugin_dir, error_msg) in errors {
                log::warn!(
                    "Failed to initialize plugin '{}': {}",
                    plugin_dir,
                    error_msg
                );
            }
        }

        Ok(Self {
            query_manager,
            plugin_manager,
            encryption_manager,
        })
    }
}
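
// Usage sketch for standalone hosts (all paths and the app ID below are
// hypothetical placeholders for illustration, not values from this repo):
//
//     let deps = BuiltinActionDependencies::new_standalone(
//         Path::new("/tmp/yaak/db.sqlite"),        // assumed db_path
//         Path::new("/tmp/yaak/blobs.sqlite"),     // assumed blob_path
//         "yaak-cli-example",                      // assumed app_id
//         PathBuf::from("vendored/plugins"),
//         PathBuf::from("installed-plugins"),
//         PathBuf::from("vendored/node/yaaknode"),
//     )
//     .await?;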
@@ -1,24 +0,0 @@
//! HTTP action implementations.

pub mod send;

use crate::BuiltinActionDependencies;
use yaak_actions::{ActionError, ActionExecutor, ActionSource};

/// Register all HTTP-related actions with the executor.
pub async fn register_http_actions(
    executor: &ActionExecutor,
    deps: &BuiltinActionDependencies,
) -> Result<(), ActionError> {
    let handler = send::HttpSendActionHandler {
        query_manager: deps.query_manager.clone(),
        plugin_manager: deps.plugin_manager.clone(),
        encryption_manager: deps.encryption_manager.clone(),
    };

    executor
        .register(send::metadata(), ActionSource::Builtin, handler)
        .await?;

    Ok(())
}
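
// Usage sketch (assumed wiring; the `ActionExecutor` constructor name is an
// assumption, not confirmed by this file):
//
//     let executor = ActionExecutor::new();
//     register_http_actions(&executor, &deps).await?;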
@@ -1,293 +0,0 @@
//! HTTP send action implementation.

use std::collections::BTreeMap;
use std::sync::Arc;
use serde_json::{json, Value};
use tokio::sync::mpsc;
use yaak_actions::{
    ActionError, ActionGroupId, ActionHandler, ActionId, ActionMetadata,
    ActionParams, ActionResult, ActionScope, CurrentContext,
    RequiredContext,
};
use yaak_crypto::manager::EncryptionManager;
use yaak_http::path_placeholders::apply_path_placeholders;
use yaak_http::sender::{HttpSender, ReqwestSender};
use yaak_http::types::{SendableHttpRequest, SendableHttpRequestOptions};
use yaak_models::models::{HttpRequest, HttpRequestHeader, HttpUrlParameter};
use yaak_models::query_manager::QueryManager;
use yaak_models::render::make_vars_hashmap;
use yaak_plugins::events::{PluginContext, RenderPurpose};
use yaak_plugins::manager::PluginManager;
use yaak_plugins::template_callback::PluginTemplateCallback;
use yaak_templates::{parse_and_render, render_json_value_raw, RenderOptions};

/// Handler for HTTP send action.
pub struct HttpSendActionHandler {
    pub query_manager: Arc<QueryManager>,
    pub plugin_manager: Arc<PluginManager>,
    pub encryption_manager: Arc<EncryptionManager>,
}

/// Metadata for the HTTP send action.
pub fn metadata() -> ActionMetadata {
    ActionMetadata {
        id: ActionId::builtin("http", "send-request"),
        label: "Send HTTP Request".to_string(),
        description: Some("Execute an HTTP request and return the response".to_string()),
        icon: Some("play".to_string()),
        scope: ActionScope::HttpRequest,
        keyboard_shortcut: None,
        requires_selection: true,
        enabled_condition: None,
        group_id: Some(ActionGroupId::builtin("send")),
        order: 10,
        required_context: RequiredContext::requires_target(),
    }
}

impl ActionHandler for HttpSendActionHandler {
    fn handle(
        &self,
        context: CurrentContext,
        params: ActionParams,
    ) -> std::pin::Pin<
        Box<dyn std::future::Future<Output = Result<ActionResult, ActionError>> + Send + 'static>,
    > {
        let query_manager = self.query_manager.clone();
        let plugin_manager = self.plugin_manager.clone();
        let encryption_manager = self.encryption_manager.clone();

        Box::pin(async move {
            // Extract request_id from context
            let request_id = context
                .target
                .as_ref()
                .ok_or_else(|| {
                    ActionError::ContextMissing {
                        missing_fields: vec!["target".to_string()],
                    }
                })?
                .id()
                .ok_or_else(|| {
                    ActionError::ContextMissing {
                        missing_fields: vec!["target.id".to_string()],
                    }
                })?
                .to_string();

            // Fetch request and environment from database (synchronous)
            let (request, environment_chain) = {
                let db = query_manager.connect();

                // Fetch HTTP request from database
                let request = db.get_http_request(&request_id).map_err(|e| {
                    ActionError::Internal(format!("Failed to fetch request {}: {}", request_id, e))
                })?;

                // Resolve environment chain for variable substitution
                let environment_chain = if let Some(env_id) = &context.environment_id {
                    db.resolve_environments(
                        &request.workspace_id,
                        request.folder_id.as_deref(),
                        Some(env_id),
                    )
                    .unwrap_or_default()
                } else {
                    db.resolve_environments(
                        &request.workspace_id,
                        request.folder_id.as_deref(),
                        None,
                    )
                    .unwrap_or_default()
                };

                (request, environment_chain)
            }; // db is dropped here

            // Create template callback with plugin support
            let plugin_context = PluginContext::new(None, Some(request.workspace_id.clone()));
            let template_callback = PluginTemplateCallback::new(
                plugin_manager,
                encryption_manager,
                &plugin_context,
                RenderPurpose::Send,
            );

            // Render templates in the request
            let rendered_request = render_http_request(
                &request,
                environment_chain,
                &template_callback,
                &RenderOptions::throw(),
            )
            .await
            .map_err(|e| ActionError::Internal(format!("Failed to render request: {}", e)))?;

            // Build sendable request
            let options = SendableHttpRequestOptions {
                timeout: params
                    .data
                    .get("timeout_ms")
                    .and_then(|v| v.as_u64())
                    .map(|ms| std::time::Duration::from_millis(ms)),
                follow_redirects: params
                    .data
                    .get("follow_redirects")
                    .and_then(|v| v.as_bool())
                    .unwrap_or(false),
            };

            let sendable = SendableHttpRequest::from_http_request(&rendered_request, options)
                .await
                .map_err(|e| ActionError::Internal(format!("Failed to build request: {}", e)))?;

            // Create event channel
            let (event_tx, mut event_rx) = mpsc::channel(100);

            // Spawn task to drain events
            let _event_handle = tokio::spawn(async move {
                while event_rx.recv().await.is_some() {
                    // For now, just drain events
                    // In the future, we could log them or emit them to UI
                }
            });

            // Send the request
            let sender = ReqwestSender::new()
                .map_err(|e| ActionError::Internal(format!("Failed to create HTTP client: {}", e)))?;
            let response = sender
                .send(sendable, event_tx)
                .await
                .map_err(|e| ActionError::Internal(format!("Failed to send request: {}", e)))?;

            // Consume response body
            let status = response.status;
            let status_reason = response.status_reason.clone();
            let headers = response.headers.clone();
            let url = response.url.clone();

            let (body_text, stats) = response
                .text()
                .await
                .map_err(|e| ActionError::Internal(format!("Failed to read response body: {}", e)))?;

            // Return success result with response data
            Ok(ActionResult::Success {
                data: Some(json!({
                    "status": status,
                    "statusReason": status_reason,
                    "headers": headers,
                    "body": body_text,
                    "contentLength": stats.size_decompressed,
                    "url": url,
                })),
                message: Some(format!("HTTP {}", status)),
            })
        })
    }
}

/// Helper function to render templates in an HTTP request.
/// Copied from yaak-cli implementation.
async fn render_http_request(
    r: &HttpRequest,
    environment_chain: Vec<yaak_models::models::Environment>,
    cb: &PluginTemplateCallback,
    opt: &RenderOptions,
) -> Result<HttpRequest, String> {
    let vars = &make_vars_hashmap(environment_chain);

    let mut url_parameters = Vec::new();
    for p in r.url_parameters.clone() {
        if !p.enabled {
            continue;
        }
        url_parameters.push(HttpUrlParameter {
            enabled: p.enabled,
            name: parse_and_render(p.name.as_str(), vars, cb, opt)
                .await
                .map_err(|e| e.to_string())?,
            value: parse_and_render(p.value.as_str(), vars, cb, opt)
                .await
                .map_err(|e| e.to_string())?,
            id: p.id,
        })
    }

    let mut headers = Vec::new();
    for p in r.headers.clone() {
        if !p.enabled {
            continue;
        }
        headers.push(HttpRequestHeader {
            enabled: p.enabled,
            name: parse_and_render(p.name.as_str(), vars, cb, opt)
                .await
                .map_err(|e| e.to_string())?,
            value: parse_and_render(p.value.as_str(), vars, cb, opt)
                .await
                .map_err(|e| e.to_string())?,
            id: p.id,
        })
    }

    let mut body = BTreeMap::new();
    for (k, v) in r.body.clone() {
        body.insert(
            k,
            render_json_value_raw(v, vars, cb, opt)
                .await
                .map_err(|e| e.to_string())?,
        );
    }

    let authentication = {
        let mut disabled = false;
        let mut auth = BTreeMap::new();
        match r.authentication.get("disabled") {
            Some(Value::Bool(true)) => {
                disabled = true;
            }
            Some(Value::String(tmpl)) => {
                disabled = parse_and_render(tmpl.as_str(), vars, cb, opt)
                    .await
                    .unwrap_or_default()
                    .is_empty();
            }
            _ => {}
        }
        if disabled {
            auth.insert("disabled".to_string(), Value::Bool(true));
        } else {
            for (k, v) in r.authentication.clone() {
                if k == "disabled" {
                    auth.insert(k, Value::Bool(false));
                } else {
                    auth.insert(
                        k,
                        render_json_value_raw(v, vars, cb, opt)
                            .await
                            .map_err(|e| e.to_string())?,
                    );
                }
            }
        }
        auth
    };

    let url = parse_and_render(r.url.clone().as_str(), vars, cb, opt)
        .await
        .map_err(|e| e.to_string())?;

    // Apply path placeholders (e.g., /users/:id -> /users/123)
    let (url, url_parameters) = apply_path_placeholders(&url, &url_parameters);

    Ok(HttpRequest {
        url,
        url_parameters,
        headers,
        body,
        authentication,
        ..r.to_owned()
    })
}
@@ -1,11 +0,0 @@
//! Built-in action implementations for Yaak.
//!
//! This crate provides concrete implementations of built-in actions using
//! the yaak-actions framework. It depends on domain-specific crates like
//! yaak-http, yaak-models, yaak-plugins, etc.

pub mod dependencies;
pub mod http;

pub use dependencies::BuiltinActionDependencies;
pub use http::register_http_actions;
@@ -1,15 +0,0 @@
[package]
name = "yaak-actions"
version = "0.1.0"
edition = "2021"
description = "Centralized action system for Yaak"

[dependencies]
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true, features = ["sync"] }
ts-rs = { workspace = true }

[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
@@ -1,14 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * Availability status for an action.
 */
export type ActionAvailability = { "status": "available" } | { "status": "available-with-prompt",
/**
 * Fields that will require prompting.
 */
prompt_fields: Array<string>, } | { "status": "unavailable",
/**
 * Fields that are missing.
 */
missing_fields: Array<string>, } | { "status": "not-found" };
@@ -1,13 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ActionGroupId } from "./ActionGroupId";
import type { ActionId } from "./ActionId";
import type { ActionScope } from "./ActionScope";

/**
 * Errors that can occur during action operations.
 */
export type ActionError = { "type": "not-found" } & ActionId | { "type": "disabled", action_id: ActionId, reason: string, } | { "type": "invalid-scope", expected: ActionScope, actual: ActionScope, } | { "type": "timeout" } & ActionId | { "type": "plugin-error" } & string | { "type": "validation-error" } & string | { "type": "permission-denied" } & string | { "type": "cancelled" } | { "type": "internal" } & string | { "type": "context-missing",
/**
 * The context fields that are missing.
 */
missing_fields: Array<string>, } | { "type": "group-not-found" } & ActionGroupId | { "type": "group-already-exists" } & ActionGroupId;
@@ -1,10 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * Unique identifier for an action group.
 *
 * Format: `namespace:group-name`
 * - Built-in: `yaak:export`
 * - Plugin: `plugin.my-plugin:utilities`
 */
export type ActionGroupId = string;
@@ -1,32 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ActionGroupId } from "./ActionGroupId";
import type { ActionScope } from "./ActionScope";

/**
 * Metadata about an action group.
 */
export type ActionGroupMetadata = {
/**
 * Unique identifier for this group.
 */
id: ActionGroupId,
/**
 * Display name for the group.
 */
name: string,
/**
 * Optional description of the group's purpose.
 */
description: string | null,
/**
 * Icon to display for the group.
 */
icon: string | null,
/**
 * Sort order for displaying groups (lower = earlier).
 */
order: number,
/**
 * Optional scope restriction (if set, group only appears in this scope).
 */
scope: ActionScope | null, };
@@ -1,18 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * Where an action group was registered from.
 */
export type ActionGroupSource = { "type": "builtin" } | { "type": "plugin",
/**
 * Plugin reference ID.
 */
ref_id: string,
/**
 * Plugin name.
 */
name: string, } | { "type": "dynamic",
/**
 * Source identifier.
 */
source_id: string, };
@@ -1,16 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ActionGroupMetadata } from "./ActionGroupMetadata";
import type { ActionMetadata } from "./ActionMetadata";

/**
 * A group with its actions for UI rendering.
 */
export type ActionGroupWithActions = {
/**
 * Group metadata.
 */
group: ActionGroupMetadata,
/**
 * Actions in this group.
 */
actions: Array<ActionMetadata>, };
@@ -1,10 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * Unique identifier for an action.
 *
 * Format: `namespace:category:name`
 * - Built-in: `yaak:http-request:send`
 * - Plugin: `plugin.copy-curl:http-request:copy`
 */
export type ActionId = string;
@@ -1,54 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ActionGroupId } from "./ActionGroupId";
import type { ActionId } from "./ActionId";
import type { ActionScope } from "./ActionScope";
import type { RequiredContext } from "./RequiredContext";

/**
 * Metadata about an action for discovery.
 */
export type ActionMetadata = {
/**
 * Unique identifier for this action.
 */
id: ActionId,
/**
 * Display label for the action.
 */
label: string,
/**
 * Optional description of what the action does.
 */
description: string | null,
/**
 * Icon name to display.
 */
icon: string | null,
/**
 * The scope this action applies to.
 */
scope: ActionScope,
/**
 * Keyboard shortcut (e.g., "Cmd+Enter").
 */
keyboardShortcut: string | null,
/**
 * Whether the action requires a selection/target.
 */
requiresSelection: boolean,
/**
 * Optional condition expression for when action is enabled.
 */
enabledCondition: string | null,
/**
 * Optional group this action belongs to.
 */
groupId: ActionGroupId | null,
/**
 * Sort order within a group (lower = earlier).
 */
order: number,
/**
 * Context requirements for this action.
 */
requiredContext: RequiredContext, };
@@ -1,10 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * Parameters passed to action handlers.
 */
export type ActionParams = {
/**
 * Arbitrary JSON parameters.
 */
data: unknown, };
@@ -1,23 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { InputPrompt } from "./InputPrompt";

/**
 * Result of action execution.
 */
export type ActionResult = { "type": "success",
/**
 * Optional data to return.
 */
data: unknown,
/**
 * Optional message to display.
 */
message: string | null, } | { "type": "requires-input",
/**
 * Prompt to show user.
 */
prompt: InputPrompt,
/**
 * Continuation token.
 */
continuation_id: string, } | { "type": "cancelled" };
@@ -1,6 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * The scope in which an action can be invoked.
 */
export type ActionScope = "global" | "http-request" | "websocket-request" | "grpc-request" | "workspace" | "folder" | "environment" | "cookie-jar";
@@ -1,18 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * Where an action was registered from.
 */
export type ActionSource = { "type": "builtin" } | { "type": "plugin",
/**
 * Plugin reference ID.
 */
ref_id: string,
/**
 * Plugin name.
 */
name: string, } | { "type": "dynamic",
/**
 * Source identifier.
 */
source_id: string, };
@@ -1,6 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * The target entity for an action.
 */
export type ActionTarget = { "type": "none" } | { "type": "http-request", id: string, } | { "type": "websocket-request", id: string, } | { "type": "grpc-request", id: string, } | { "type": "workspace", id: string, } | { "type": "folder", id: string, } | { "type": "environment", id: string, } | { "type": "multiple", targets: Array<ActionTarget>, };
@@ -1,6 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * How strictly a context field is required.
 */
export type ContextRequirement = "not-required" | "optional" | "required" | "required-with-prompt";
@@ -1,27 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ActionTarget } from "./ActionTarget";

/**
 * Current context state from the application.
 */
export type CurrentContext = {
/**
 * Current workspace ID (if any).
 */
workspaceId: string | null,
/**
 * Current environment ID (if any).
 */
environmentId: string | null,
/**
 * Currently selected target (if any).
 */
target: ActionTarget | null,
/**
 * Whether a window context is available.
 */
hasWindow: boolean,
/**
 * Whether the context provider can prompt for missing fields.
 */
canPrompt: boolean, };
@@ -1,7 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { SelectOption } from "./SelectOption";

/**
 * A prompt for user input.
 */
export type InputPrompt = { "type": "text", label: string, placeholder: string | null, default_value: string | null, } | { "type": "select", label: string, options: Array<SelectOption>, } | { "type": "confirm", label: string, };
@@ -1,23 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ContextRequirement } from "./ContextRequirement";

/**
 * Specifies what context fields an action requires.
 */
export type RequiredContext = {
/**
 * Action requires a workspace to be active.
 */
workspace: ContextRequirement,
/**
 * Action requires an environment to be selected.
 */
environment: ContextRequirement,
/**
 * Action requires a specific target entity (request, folder, etc.).
 */
target: ContextRequirement,
/**
 * Action requires a window context (for UI operations).
 */
window: ContextRequirement, };
@@ -1,6 +0,0 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.

/**
 * An option in a select prompt.
 */
export type SelectOption = { label: string, value: string, };
@@ -1,331 +0,0 @@
//! Action context types and context-aware filtering.

use serde::{Deserialize, Serialize};
use ts_rs::TS;

use crate::ActionScope;

/// Specifies what context fields an action requires.
#[derive(Clone, Debug, Default, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct RequiredContext {
    /// Action requires a workspace to be active.
    #[serde(default)]
    pub workspace: ContextRequirement,

    /// Action requires an environment to be selected.
    #[serde(default)]
    pub environment: ContextRequirement,

    /// Action requires a specific target entity (request, folder, etc.).
    #[serde(default)]
    pub target: ContextRequirement,

    /// Action requires a window context (for UI operations).
    #[serde(default)]
    pub window: ContextRequirement,
}

impl RequiredContext {
    /// Action requires a target entity.
    pub fn requires_target() -> Self {
        Self {
            target: ContextRequirement::Required,
            ..Default::default()
        }
    }

    /// Action requires workspace and target.
    pub fn requires_workspace_and_target() -> Self {
        Self {
            workspace: ContextRequirement::Required,
            target: ContextRequirement::Required,
            ..Default::default()
        }
    }

    /// Action works globally, no specific context needed.
    pub fn global() -> Self {
        Self::default()
    }

    /// Action requires target with prompt if missing.
    pub fn requires_target_with_prompt() -> Self {
        Self {
            target: ContextRequirement::RequiredWithPrompt,
            ..Default::default()
        }
    }

    /// Action requires environment with prompt if missing.
    pub fn requires_environment_with_prompt() -> Self {
        Self {
            environment: ContextRequirement::RequiredWithPrompt,
            ..Default::default()
        }
    }
}
|
||||
|
||||
/// How strictly a context field is required.
|
||||
#[derive(Clone, Debug, Default, PartialEq, Eq, Serialize, Deserialize, TS)]
|
||||
#[ts(export)]
|
||||
#[serde(rename_all = "kebab-case")]
|
||||
pub enum ContextRequirement {
|
||||
/// Field is not needed.
|
||||
#[default]
|
||||
NotRequired,
|
||||
|
||||
/// Field is optional but will be used if available.
|
||||
Optional,
|
||||
|
||||
/// Field must be present; action will fail without it.
|
||||
Required,
|
||||
|
||||
/// Field must be present; prompt user to select if missing.
|
||||
RequiredWithPrompt,
|
||||
}
|
||||
|
||||
/// Current context state from the application.
|
||||
#[derive(Clone, Debug, Default, Serialize, Deserialize, TS)]
|
||||
#[ts(export)]
|
||||
#[serde(rename_all = "camelCase")]
|
||||
pub struct CurrentContext {
|
||||
/// Current workspace ID (if any).
|
||||
#[serde(skip_serializing_if = "Option::is_none")]
|
||||
pub workspace_id: Option<String>,
|
||||
|
||||
/// Current environment ID (if any).
|
||||
#[serde(skip_serializing_if = "Option::is_none")]
|
||||
pub environment_id: Option<String>,
|
||||
|
||||
/// Currently selected target (if any).
|
||||
#[serde(skip_serializing_if = "Option::is_none")]
|
||||
pub target: Option<ActionTarget>,
|
||||
|
||||
/// Whether a window context is available.
|
||||
#[serde(default)]
|
||||
pub has_window: bool,
|
||||
|
||||
/// Whether the context provider can prompt for missing fields.
|
||||
#[serde(default)]
|
||||
pub can_prompt: bool,
|
||||
}
|
||||
|
||||
/// The target entity for an action.
|
||||
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
|
||||
#[ts(export)]
|
||||
#[serde(tag = "type", rename_all = "kebab-case")]
|
||||
pub enum ActionTarget {
|
||||
/// No target.
|
||||
None,
|
||||
/// HTTP request target.
|
||||
HttpRequest { id: String },
|
||||
/// WebSocket request target.
|
||||
WebsocketRequest { id: String },
|
||||
/// gRPC request target.
|
||||
GrpcRequest { id: String },
|
||||
/// Workspace target.
|
||||
Workspace { id: String },
|
||||
/// Folder target.
|
||||
Folder { id: String },
|
||||
/// Environment target.
|
||||
Environment { id: String },
|
||||
/// Multiple targets.
|
||||
Multiple { targets: Vec<ActionTarget> },
|
||||
}
|
||||
|
||||
impl ActionTarget {
|
||||
/// Get the scope this target corresponds to.
|
||||
pub fn scope(&self) -> Option<ActionScope> {
|
||||
match self {
|
||||
Self::None => None,
|
||||
Self::HttpRequest { .. } => Some(ActionScope::HttpRequest),
|
||||
Self::WebsocketRequest { .. } => Some(ActionScope::WebsocketRequest),
|
||||
Self::GrpcRequest { .. } => Some(ActionScope::GrpcRequest),
|
||||
Self::Workspace { .. } => Some(ActionScope::Workspace),
|
||||
Self::Folder { .. } => Some(ActionScope::Folder),
|
||||
Self::Environment { .. } => Some(ActionScope::Environment),
|
||||
Self::Multiple { .. } => None,
|
||||
}
|
||||
}
|
||||
|
||||
/// Get the ID of the target (if single target).
|
||||
pub fn id(&self) -> Option<&str> {
|
||||
match self {
|
||||
Self::HttpRequest { id }
|
||||
| Self::WebsocketRequest { id }
|
||||
| Self::GrpcRequest { id }
|
||||
| Self::Workspace { id }
|
||||
| Self::Folder { id }
|
||||
| Self::Environment { id } => Some(id),
|
||||
Self::None | Self::Multiple { .. } => None,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Availability status for an action.
|
||||
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
|
||||
#[ts(export)]
|
||||
#[serde(tag = "status", rename_all = "kebab-case")]
|
||||
pub enum ActionAvailability {
|
||||
/// Action is ready to execute.
|
||||
Available,
|
||||
|
||||
/// Action can execute but will prompt for missing context.
|
||||
AvailableWithPrompt {
|
||||
/// Fields that will require prompting.
|
||||
prompt_fields: Vec<String>,
|
||||
},
|
||||
|
||||
/// Action cannot execute due to missing context.
|
||||
Unavailable {
|
||||
/// Fields that are missing.
|
||||
missing_fields: Vec<String>,
|
||||
},
|
||||
|
||||
/// Action not found in registry.
|
||||
NotFound,
|
||||
}
|
||||
|
||||
impl ActionAvailability {
|
||||
/// Check if the action is available (possibly with prompts).
|
||||
pub fn is_available(&self) -> bool {
|
||||
matches!(self, Self::Available | Self::AvailableWithPrompt { .. })
|
||||
}
|
||||
|
||||
/// Check if the action is immediately available without prompts.
|
||||
pub fn is_immediately_available(&self) -> bool {
|
||||
matches!(self, Self::Available)
|
||||
}
|
||||
}
|
||||
|
||||
/// Check if required context is satisfied by current context.
|
||||
pub fn check_context_availability(
|
||||
required: &RequiredContext,
|
||||
current: &CurrentContext,
|
||||
) -> ActionAvailability {
|
||||
let mut missing_fields = Vec::new();
|
||||
let mut prompt_fields = Vec::new();
|
||||
|
||||
// Check workspace
|
||||
check_field(
|
||||
"workspace",
|
||||
current.workspace_id.is_some(),
|
||||
&required.workspace,
|
||||
current.can_prompt,
|
||||
&mut missing_fields,
|
||||
&mut prompt_fields,
|
||||
);
|
||||
|
||||
// Check environment
|
||||
check_field(
|
||||
"environment",
|
||||
current.environment_id.is_some(),
|
||||
&required.environment,
|
||||
current.can_prompt,
|
||||
&mut missing_fields,
|
||||
&mut prompt_fields,
|
||||
);
|
||||
|
||||
// Check target
|
||||
check_field(
|
||||
"target",
|
||||
current.target.is_some(),
|
||||
&required.target,
|
||||
current.can_prompt,
|
||||
&mut missing_fields,
|
||||
&mut prompt_fields,
|
||||
);
|
||||
|
||||
// Check window
|
||||
check_field(
|
||||
"window",
|
||||
current.has_window,
|
||||
&required.window,
|
||||
false, // Can't prompt for window
|
||||
&mut missing_fields,
|
||||
&mut prompt_fields,
|
||||
);
|
||||
|
||||
if !missing_fields.is_empty() {
|
||||
ActionAvailability::Unavailable { missing_fields }
|
||||
} else if !prompt_fields.is_empty() {
|
||||
ActionAvailability::AvailableWithPrompt { prompt_fields }
|
||||
} else {
|
||||
ActionAvailability::Available
|
||||
}
|
||||
}
|
||||
|
||||
fn check_field(
|
||||
name: &str,
|
||||
has_value: bool,
|
||||
requirement: &ContextRequirement,
|
||||
can_prompt: bool,
|
||||
missing: &mut Vec<String>,
|
||||
promptable: &mut Vec<String>,
|
||||
) {
|
||||
match requirement {
|
||||
ContextRequirement::NotRequired | ContextRequirement::Optional => {}
|
||||
ContextRequirement::Required => {
|
||||
if !has_value {
|
||||
missing.push(name.to_string());
|
||||
}
|
||||
}
|
||||
ContextRequirement::RequiredWithPrompt => {
|
||||
if !has_value {
|
||||
if can_prompt {
|
||||
promptable.push(name.to_string());
|
||||
} else {
|
||||
missing.push(name.to_string());
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn test_context_available() {
|
||||
let required = RequiredContext::requires_target();
|
||||
let current = CurrentContext {
|
||||
target: Some(ActionTarget::HttpRequest {
|
||||
id: "123".to_string(),
|
||||
}),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let availability = check_context_availability(&required, ¤t);
|
||||
assert!(matches!(availability, ActionAvailability::Available));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_context_missing() {
|
||||
let required = RequiredContext::requires_target();
|
||||
let current = CurrentContext::default();
|
||||
|
||||
let availability = check_context_availability(&required, ¤t);
|
||||
assert!(matches!(
|
||||
availability,
|
||||
ActionAvailability::Unavailable { missing_fields } if missing_fields == vec!["target"]
|
||||
));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_context_promptable() {
|
||||
let required = RequiredContext::requires_target_with_prompt();
|
||||
let current = CurrentContext {
|
||||
can_prompt: true,
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let availability = check_context_availability(&required, ¤t);
|
||||
assert!(matches!(
|
||||
availability,
|
||||
ActionAvailability::AvailableWithPrompt { prompt_fields } if prompt_fields == vec!["target"]
|
||||
));
|
||||
}
|
||||
}
|
||||
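Taken together, these types let any entry point (desktop, CLI, deep link, MCP) gate an action before running it. A minimal sketch of that flow, assuming the crate is imported as `yaak_actions` (the crate name is not visible in this diff):

```rust
use yaak_actions::{
    check_context_availability, ActionAvailability, CurrentContext, RequiredContext,
};

fn main() {
    // An action that needs a target, and will prompt for it when possible.
    let required = RequiredContext::requires_target_with_prompt();

    // Context as a CLI might build it: no target selected, but prompting allowed.
    let current = CurrentContext {
        workspace_id: Some("wk_123".to_string()),
        can_prompt: true,
        ..Default::default()
    };

    match check_context_availability(&required, &current) {
        ActionAvailability::Available => println!("run immediately"),
        ActionAvailability::AvailableWithPrompt { prompt_fields } => {
            println!("prompt for: {}", prompt_fields.join(", "))
        }
        ActionAvailability::Unavailable { missing_fields } => {
            eprintln!("cannot run, missing: {}", missing_fields.join(", "))
        }
        ActionAvailability::NotFound => eprintln!("unknown action"),
    }
}
```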
@@ -1,131 +0,0 @@
//! Error types for the action system.

use serde::{Deserialize, Serialize};
use thiserror::Error;
use ts_rs::TS;

use crate::{ActionGroupId, ActionId};

/// Errors that can occur during action operations.
#[derive(Debug, Error, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(tag = "type", rename_all = "kebab-case")]
pub enum ActionError {
    /// Action not found in registry.
    #[error("Action not found: {0}")]
    NotFound(ActionId),

    /// Action is disabled in current context.
    #[error("Action is disabled: {action_id} - {reason}")]
    Disabled { action_id: ActionId, reason: String },

    /// Invalid scope for the action.
    #[error("Invalid scope: expected {expected:?}, got {actual:?}")]
    InvalidScope {
        expected: crate::ActionScope,
        actual: crate::ActionScope,
    },

    /// Action execution timed out.
    #[error("Action timed out: {0}")]
    Timeout(ActionId),

    /// Error from plugin execution.
    #[error("Plugin error: {0}")]
    PluginError(String),

    /// Validation error in action parameters.
    #[error("Validation error: {0}")]
    ValidationError(String),

    /// Permission denied for action.
    #[error("Permission denied: {0}")]
    PermissionDenied(String),

    /// Action was cancelled by user.
    #[error("Action cancelled by user")]
    Cancelled,

    /// Internal error.
    #[error("Internal error: {0}")]
    Internal(String),

    /// Required context is missing.
    #[error("Required context missing: {missing_fields:?}")]
    ContextMissing {
        /// The context fields that are missing.
        missing_fields: Vec<String>,
    },

    /// Action group not found.
    #[error("Group not found: {0}")]
    GroupNotFound(ActionGroupId),

    /// Action group already exists.
    #[error("Group already exists: {0}")]
    GroupAlreadyExists(ActionGroupId),
}

impl ActionError {
    /// Get a user-friendly error message.
    pub fn user_message(&self) -> String {
        match self {
            Self::NotFound(id) => format!("Action '{}' is not available", id),
            Self::Disabled { reason, .. } => reason.clone(),
            Self::InvalidScope { expected, actual } => {
                format!("Action requires {:?} scope, but got {:?}", expected, actual)
            }
            Self::Timeout(_) => "The operation took too long and was cancelled".into(),
            Self::PluginError(msg) => format!("Plugin error: {}", msg),
            Self::ValidationError(msg) => format!("Invalid input: {}", msg),
            Self::PermissionDenied(resource) => format!("Permission denied for {}", resource),
            Self::Cancelled => "Operation was cancelled".into(),
            Self::Internal(_) => "An unexpected error occurred".into(),
            Self::ContextMissing { missing_fields } => {
                format!("Missing required context: {}", missing_fields.join(", "))
            }
            Self::GroupNotFound(id) => format!("Action group '{}' not found", id),
            Self::GroupAlreadyExists(id) => format!("Action group '{}' already exists", id),
        }
    }

    /// Whether this error should be reported to telemetry.
    pub fn is_reportable(&self) -> bool {
        matches!(self, Self::Internal(_) | Self::PluginError(_))
    }

    /// Whether this error can potentially be resolved by user interaction.
    pub fn is_promptable(&self) -> bool {
        matches!(self, Self::ContextMissing { .. })
    }

    /// Whether this is a user-initiated cancellation.
    pub fn is_cancelled(&self) -> bool {
        matches!(self, Self::Cancelled)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_error_messages() {
        let err = ActionError::ContextMissing {
            missing_fields: vec!["workspace".into()],
        };
        assert_eq!(err.user_message(), "Missing required context: workspace");
        assert!(err.is_promptable());
        assert!(!err.is_cancelled());

        let cancelled = ActionError::Cancelled;
        assert!(cancelled.is_cancelled());
        assert!(!cancelled.is_promptable());

        let not_found = ActionError::NotFound(ActionId::builtin("test", "action"));
        assert_eq!(
            not_found.user_message(),
            "Action 'yaak:test:action' is not available"
        );
    }
}
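The helper methods above are enough to drive error reporting in a CLI front end. A rough sketch; the exit-code choices and the `yaak_actions` crate name are assumptions, not from this diff:

```rust
use yaak_actions::{ActionError, ActionResult};

/// Map an action outcome to a process exit code for the CLI (sketch).
fn exit_code_for(result: Result<ActionResult, ActionError>) -> i32 {
    match result {
        Ok(_) => 0,
        Err(err) => {
            // Always show the friendly message, not the Debug form.
            eprintln!("{}", err.user_message());
            if err.is_cancelled() {
                130 // conventional "interrupted" exit code
            } else if err.is_promptable() {
                2 // caller could re-run interactively and prompt
            } else {
                1
            }
        }
    }
}
```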
@@ -1,606 +0,0 @@
//! Action executor - central hub for action registration and invocation.

use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;

use crate::{
    check_context_availability, ActionAvailability, ActionError, ActionGroupId,
    ActionGroupMetadata, ActionGroupSource, ActionGroupWithActions, ActionHandler, ActionId,
    ActionMetadata, ActionParams, ActionResult, ActionScope, ActionSource, CurrentContext,
    RegisteredActionGroup,
};

/// Options for listing actions.
#[derive(Clone, Debug, Default)]
pub struct ListActionsOptions {
    /// Filter by scope.
    pub scope: Option<ActionScope>,
    /// Filter by group.
    pub group_id: Option<ActionGroupId>,
    /// Search term for label/description.
    pub search: Option<String>,
}

/// A registered action with its handler.
struct RegisteredAction {
    /// Action metadata.
    metadata: ActionMetadata,
    /// Where the action was registered from.
    source: ActionSource,
    /// The handler for this action.
    handler: Arc<dyn ActionHandler>,
}

/// Central hub for action registration and invocation.
///
/// The executor owns all action metadata and handlers, ensuring every
/// registered action has a handler by construction.
pub struct ActionExecutor {
    /// All registered actions indexed by ID.
    actions: RwLock<HashMap<ActionId, RegisteredAction>>,

    /// Actions indexed by scope for efficient filtering.
    scope_index: RwLock<HashMap<ActionScope, Vec<ActionId>>>,

    /// All registered groups indexed by ID.
    groups: RwLock<HashMap<ActionGroupId, RegisteredActionGroup>>,
}

impl Default for ActionExecutor {
    fn default() -> Self {
        Self::new()
    }
}

impl ActionExecutor {
    /// Create a new empty executor.
    pub fn new() -> Self {
        Self {
            actions: RwLock::new(HashMap::new()),
            scope_index: RwLock::new(HashMap::new()),
            groups: RwLock::new(HashMap::new()),
        }
    }

    // ─────────────────────────────────────────────────────────────────────────
    // Action Registration
    // ─────────────────────────────────────────────────────────────────────────

    /// Register an action with its handler.
    ///
    /// Every action must have a handler - this is enforced by the API.
    pub async fn register<H: ActionHandler + 'static>(
        &self,
        metadata: ActionMetadata,
        source: ActionSource,
        handler: H,
    ) -> Result<ActionId, ActionError> {
        let id = metadata.id.clone();
        let scope = metadata.scope.clone();

        let action = RegisteredAction {
            metadata,
            source,
            handler: Arc::new(handler),
        };

        // Insert action
        {
            let mut actions = self.actions.write().await;
            actions.insert(id.clone(), action);
        }

        // Update scope index
        {
            let mut index = self.scope_index.write().await;
            index.entry(scope).or_default().push(id.clone());
        }

        Ok(id)
    }

    /// Unregister an action.
    pub async fn unregister(&self, id: &ActionId) -> Result<(), ActionError> {
        let mut actions = self.actions.write().await;

        let action = actions
            .remove(id)
            .ok_or_else(|| ActionError::NotFound(id.clone()))?;

        // Update scope index
        {
            let mut index = self.scope_index.write().await;
            if let Some(ids) = index.get_mut(&action.metadata.scope) {
                ids.retain(|i| i != id);
            }
        }

        // Remove from group if assigned
        if let Some(group_id) = &action.metadata.group_id {
            let mut groups = self.groups.write().await;
            if let Some(group) = groups.get_mut(group_id) {
                group.action_ids.retain(|i| i != id);
            }
        }

        Ok(())
    }

    /// Unregister all actions from a specific source.
    pub async fn unregister_source(&self, source_id: &str) -> Vec<ActionId> {
        let actions_to_remove: Vec<ActionId> = {
            let actions = self.actions.read().await;
            actions
                .iter()
                .filter(|(_, a)| match &a.source {
                    ActionSource::Plugin { ref_id, .. } => ref_id == source_id,
                    ActionSource::Dynamic {
                        source_id: sid, ..
                    } => sid == source_id,
                    ActionSource::Builtin => false,
                })
                .map(|(id, _)| id.clone())
                .collect()
        };

        for id in &actions_to_remove {
            let _ = self.unregister(id).await;
        }

        actions_to_remove
    }

    // ─────────────────────────────────────────────────────────────────────────
    // Action Invocation
    // ─────────────────────────────────────────────────────────────────────────

    /// Invoke an action with the given context and parameters.
    ///
    /// This will:
    /// 1. Look up the action metadata
    /// 2. Check context availability
    /// 3. Execute the handler
    pub async fn invoke(
        &self,
        action_id: &ActionId,
        context: CurrentContext,
        params: ActionParams,
    ) -> Result<ActionResult, ActionError> {
        // Get action and handler
        let (metadata, handler) = {
            let actions = self.actions.read().await;
            let action = actions
                .get(action_id)
                .ok_or_else(|| ActionError::NotFound(action_id.clone()))?;
            (action.metadata.clone(), action.handler.clone())
        };

        // Check context availability
        let availability = check_context_availability(&metadata.required_context, &context);

        match availability {
            ActionAvailability::Available | ActionAvailability::AvailableWithPrompt { .. } => {
                // Context is satisfied, proceed with execution
            }
            ActionAvailability::Unavailable { missing_fields } => {
                return Err(ActionError::ContextMissing { missing_fields });
            }
            ActionAvailability::NotFound => {
                return Err(ActionError::NotFound(action_id.clone()));
            }
        }

        // Execute handler
        handler.handle(context, params).await
    }

    /// Invoke an action, skipping context validation.
    ///
    /// Use this when you've already validated the context externally.
    pub async fn invoke_unchecked(
        &self,
        action_id: &ActionId,
        context: CurrentContext,
        params: ActionParams,
    ) -> Result<ActionResult, ActionError> {
        // Get handler
        let handler = {
            let actions = self.actions.read().await;
            let action = actions
                .get(action_id)
                .ok_or_else(|| ActionError::NotFound(action_id.clone()))?;
            action.handler.clone()
        };

        // Execute handler
        handler.handle(context, params).await
    }

    // ─────────────────────────────────────────────────────────────────────────
    // Action Queries
    // ─────────────────────────────────────────────────────────────────────────

    /// Get action metadata by ID.
    pub async fn get(&self, id: &ActionId) -> Option<ActionMetadata> {
        let actions = self.actions.read().await;
        actions.get(id).map(|a| a.metadata.clone())
    }

    /// List all actions, optionally filtered.
    pub async fn list(&self, options: ListActionsOptions) -> Vec<ActionMetadata> {
        let actions = self.actions.read().await;

        let mut result: Vec<_> = actions
            .values()
            .filter(|a| {
                // Scope filter
                if let Some(scope) = &options.scope {
                    if &a.metadata.scope != scope {
                        return false;
                    }
                }

                // Group filter
                if let Some(group_id) = &options.group_id {
                    if a.metadata.group_id.as_ref() != Some(group_id) {
                        return false;
                    }
                }

                // Search filter
                if let Some(search) = &options.search {
                    let search = search.to_lowercase();
                    let matches_label = a.metadata.label.to_lowercase().contains(&search);
                    let matches_desc = a
                        .metadata
                        .description
                        .as_ref()
                        .map(|d| d.to_lowercase().contains(&search))
                        .unwrap_or(false);
                    if !matches_label && !matches_desc {
                        return false;
                    }
                }

                true
            })
            .map(|a| a.metadata.clone())
            .collect();

        // Sort by order then label
        result.sort_by(|a, b| a.order.cmp(&b.order).then_with(|| a.label.cmp(&b.label)));

        result
    }

    /// List actions available in the given context.
    pub async fn list_available(
        &self,
        context: &CurrentContext,
        options: ListActionsOptions,
    ) -> Vec<(ActionMetadata, ActionAvailability)> {
        let all_actions = self.list(options).await;

        all_actions
            .into_iter()
            .map(|action| {
                let availability =
                    check_context_availability(&action.required_context, context);
                (action, availability)
            })
            .filter(|(_, availability)| availability.is_available())
            .collect()
    }

    /// Get availability status for a specific action.
    pub async fn get_availability(
        &self,
        id: &ActionId,
        context: &CurrentContext,
    ) -> ActionAvailability {
        let actions = self.actions.read().await;

        match actions.get(id) {
            Some(action) => {
                check_context_availability(&action.metadata.required_context, context)
            }
            None => ActionAvailability::NotFound,
        }
    }

    // ─────────────────────────────────────────────────────────────────────────
    // Group Registration
    // ─────────────────────────────────────────────────────────────────────────

    /// Register an action group.
    pub async fn register_group(
        &self,
        metadata: ActionGroupMetadata,
        source: ActionGroupSource,
    ) -> Result<ActionGroupId, ActionError> {
        let id = metadata.id.clone();

        let mut groups = self.groups.write().await;
        if groups.contains_key(&id) {
            return Err(ActionError::GroupAlreadyExists(id));
        }

        groups.insert(
            id.clone(),
            RegisteredActionGroup {
                metadata,
                action_ids: Vec::new(),
                source,
            },
        );

        Ok(id)
    }

    /// Unregister a group (does not unregister its actions).
    pub async fn unregister_group(&self, id: &ActionGroupId) -> Result<(), ActionError> {
        let mut groups = self.groups.write().await;
        groups
            .remove(id)
            .ok_or_else(|| ActionError::GroupNotFound(id.clone()))?;
        Ok(())
    }

    /// Add an action to a group.
    pub async fn add_to_group(
        &self,
        action_id: &ActionId,
        group_id: &ActionGroupId,
    ) -> Result<(), ActionError> {
        // Update action's group_id
        {
            let mut actions = self.actions.write().await;
            let action = actions
                .get_mut(action_id)
                .ok_or_else(|| ActionError::NotFound(action_id.clone()))?;
            action.metadata.group_id = Some(group_id.clone());
        }

        // Add to group's action list
        {
            let mut groups = self.groups.write().await;
            let group = groups
                .get_mut(group_id)
                .ok_or_else(|| ActionError::GroupNotFound(group_id.clone()))?;

            if !group.action_ids.contains(action_id) {
                group.action_ids.push(action_id.clone());
            }
        }

        Ok(())
    }

    // ─────────────────────────────────────────────────────────────────────────
    // Group Queries
    // ─────────────────────────────────────────────────────────────────────────

    /// Get a group by ID.
    pub async fn get_group(&self, id: &ActionGroupId) -> Option<ActionGroupMetadata> {
        let groups = self.groups.read().await;
        groups.get(id).map(|g| g.metadata.clone())
    }

    /// List all groups, optionally filtered by scope.
    pub async fn list_groups(&self, scope: Option<ActionScope>) -> Vec<ActionGroupMetadata> {
        let groups = self.groups.read().await;

        let mut result: Vec<_> = groups
            .values()
            .filter(|g| {
                scope.as_ref().map_or(true, |s| {
                    g.metadata.scope.as_ref().map_or(true, |gs| gs == s)
                })
            })
            .map(|g| g.metadata.clone())
            .collect();

        result.sort_by_key(|g| g.order);
        result
    }

    /// List all actions in a specific group.
    pub async fn list_by_group(&self, group_id: &ActionGroupId) -> Vec<ActionMetadata> {
        let groups = self.groups.read().await;
        let actions = self.actions.read().await;

        groups
            .get(group_id)
            .map(|group| {
                let mut result: Vec<_> = group
                    .action_ids
                    .iter()
                    .filter_map(|id| actions.get(id).map(|a| a.metadata.clone()))
                    .collect();
                result.sort_by_key(|a| a.order);
                result
            })
            .unwrap_or_default()
    }

    /// Get actions organized by their groups.
    pub async fn list_grouped(&self, scope: Option<ActionScope>) -> Vec<ActionGroupWithActions> {
        let group_list = self.list_groups(scope).await;
        let mut result = Vec::new();

        for group in group_list {
            let actions = self.list_by_group(&group.id).await;
            result.push(ActionGroupWithActions { group, actions });
        }

        result
    }

    // ─────────────────────────────────────────────────────────────────────────
    // Built-in Registration
    // ─────────────────────────────────────────────────────────────────────────

    /// Register all built-in groups.
    pub async fn register_builtin_groups(&self) -> Result<(), ActionError> {
        for group in crate::groups::builtin::all() {
            self.register_group(group, ActionGroupSource::Builtin).await?;
        }
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::{handler_fn, RequiredContext};

    async fn create_test_executor() -> ActionExecutor {
        let executor = ActionExecutor::new();
        executor
            .register(
                ActionMetadata {
                    id: ActionId::builtin("test", "echo"),
                    label: "Echo".to_string(),
                    description: None,
                    icon: None,
                    scope: ActionScope::Global,
                    keyboard_shortcut: None,
                    requires_selection: false,
                    enabled_condition: None,
                    group_id: None,
                    order: 0,
                    required_context: RequiredContext::default(),
                },
                ActionSource::Builtin,
                handler_fn(|_ctx, params| async move {
                    let msg: String = params.get("message").unwrap_or_default();
                    Ok(ActionResult::with_message(msg))
                }),
            )
            .await
            .unwrap();
        executor
    }

    #[tokio::test]
    async fn test_register_and_invoke() {
        let executor = create_test_executor().await;
        let action_id = ActionId::builtin("test", "echo");

        let params = ActionParams::from_json(serde_json::json!({
            "message": "Hello, World!"
        }));

        let result = executor
            .invoke(&action_id, CurrentContext::default(), params)
            .await
            .unwrap();

        match result {
            ActionResult::Success { message, .. } => {
                assert_eq!(message, Some("Hello, World!".to_string()));
            }
            _ => panic!("Expected Success result"),
        }
    }

    #[tokio::test]
    async fn test_invoke_not_found() {
        let executor = ActionExecutor::new();
        let action_id = ActionId::builtin("test", "unknown");

        let result = executor
            .invoke(&action_id, CurrentContext::default(), ActionParams::empty())
            .await;

        assert!(matches!(result, Err(ActionError::NotFound(_))));
    }

    #[tokio::test]
    async fn test_list_by_scope() {
        let executor = ActionExecutor::new();

        executor
            .register(
                ActionMetadata {
                    id: ActionId::builtin("global", "one"),
                    label: "Global One".to_string(),
                    description: None,
                    icon: None,
                    scope: ActionScope::Global,
                    keyboard_shortcut: None,
                    requires_selection: false,
                    enabled_condition: None,
                    group_id: None,
                    order: 0,
                    required_context: RequiredContext::default(),
                },
                ActionSource::Builtin,
                handler_fn(|_ctx, _params| async move { Ok(ActionResult::ok()) }),
            )
            .await
            .unwrap();

        executor
            .register(
                ActionMetadata {
                    id: ActionId::builtin("http", "one"),
                    label: "HTTP One".to_string(),
                    description: None,
                    icon: None,
                    scope: ActionScope::HttpRequest,
                    keyboard_shortcut: None,
                    requires_selection: false,
                    enabled_condition: None,
                    group_id: None,
                    order: 0,
                    required_context: RequiredContext::default(),
                },
                ActionSource::Builtin,
                handler_fn(|_ctx, _params| async move { Ok(ActionResult::ok()) }),
            )
            .await
            .unwrap();

        let global_actions = executor
            .list(ListActionsOptions {
                scope: Some(ActionScope::Global),
                ..Default::default()
            })
            .await;
        assert_eq!(global_actions.len(), 1);

        let http_actions = executor
            .list(ListActionsOptions {
                scope: Some(ActionScope::HttpRequest),
                ..Default::default()
            })
            .await;
        assert_eq!(http_actions.len(), 1);
    }

    #[tokio::test]
    async fn test_groups() {
        let executor = ActionExecutor::new();
        executor.register_builtin_groups().await.unwrap();

        let groups = executor.list_groups(None).await;
        assert!(!groups.is_empty());

        let export_group = executor.get_group(&ActionGroupId::builtin("export")).await;
        assert!(export_group.is_some());
        assert_eq!(export_group.unwrap().name, "Export");
    }

    #[tokio::test]
    async fn test_unregister() {
        let executor = create_test_executor().await;
        let action_id = ActionId::builtin("test", "echo");

        assert!(executor.get(&action_id).await.is_some());

        executor.unregister(&action_id).await.unwrap();
        assert!(executor.get(&action_id).await.is_none());
    }
}
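For orientation, here is how registration and invocation compose at a call site. This is a sketch under the assumed crate name `yaak_actions`; the action ID and handler body are illustrative:

```rust
use yaak_actions::{
    handler_fn, ActionExecutor, ActionId, ActionMetadata, ActionParams, ActionResult, ActionScope,
    ActionSource, CurrentContext, RequiredContext,
};

#[tokio::main]
async fn main() -> Result<(), yaak_actions::ActionError> {
    let executor = ActionExecutor::new();
    executor.register_builtin_groups().await?;

    // The handler is supplied at registration time, so a registered
    // action can never be missing one.
    executor
        .register(
            ActionMetadata {
                id: ActionId::builtin("http-request", "copy-url"),
                label: "Copy URL".to_string(),
                description: None,
                icon: None,
                scope: ActionScope::HttpRequest,
                keyboard_shortcut: None,
                requires_selection: false,
                enabled_condition: None,
                group_id: None,
                order: 0,
                required_context: RequiredContext::requires_target(),
            },
            ActionSource::Builtin,
            handler_fn(|ctx, _params| async move {
                let id = ctx.target.as_ref().and_then(|t| t.id()).unwrap_or("?").to_string();
                Ok(ActionResult::with_message(format!("copied URL of {id}")))
            }),
        )
        .await?;

    // Invoking with an empty context fails the check before the handler runs.
    let err = executor
        .invoke(
            &ActionId::builtin("http-request", "copy-url"),
            CurrentContext::default(),
            ActionParams::empty(),
        )
        .await
        .unwrap_err();
    eprintln!("{}", err.user_message()); // "Missing required context: target"
    Ok(())
}
```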
@@ -1,208 +0,0 @@
//! Action group types and management.

use serde::{Deserialize, Serialize};
use ts_rs::TS;

use crate::{ActionId, ActionMetadata, ActionScope};

/// Unique identifier for an action group.
///
/// Format: `namespace:group-name`
/// - Built-in: `yaak:export`
/// - Plugin: `plugin.my-plugin:utilities`
#[derive(Clone, Debug, Hash, Eq, PartialEq, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct ActionGroupId(pub String);

impl ActionGroupId {
    /// Create a namespaced group ID.
    pub fn new(namespace: &str, name: &str) -> Self {
        Self(format!("{}:{}", namespace, name))
    }

    /// Create ID for built-in groups.
    pub fn builtin(name: &str) -> Self {
        Self::new("yaak", name)
    }

    /// Create ID for plugin groups.
    pub fn plugin(plugin_ref_id: &str, name: &str) -> Self {
        Self::new(&format!("plugin.{}", plugin_ref_id), name)
    }

    /// Get the raw string value.
    pub fn as_str(&self) -> &str {
        &self.0
    }
}

impl std::fmt::Display for ActionGroupId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.0)
    }
}

/// Metadata about an action group.
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct ActionGroupMetadata {
    /// Unique identifier for this group.
    pub id: ActionGroupId,

    /// Display name for the group.
    pub name: String,

    /// Optional description of the group's purpose.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub description: Option<String>,

    /// Icon to display for the group.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub icon: Option<String>,

    /// Sort order for displaying groups (lower = earlier).
    #[serde(default)]
    pub order: i32,

    /// Optional scope restriction (if set, group only appears in this scope).
    #[serde(skip_serializing_if = "Option::is_none")]
    pub scope: Option<ActionScope>,
}

/// Where an action group was registered from.
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(tag = "type", rename_all = "kebab-case")]
pub enum ActionGroupSource {
    /// Built into Yaak core.
    Builtin,
    /// Registered by a plugin.
    Plugin {
        /// Plugin reference ID.
        ref_id: String,
        /// Plugin name.
        name: String,
    },
    /// Registered at runtime.
    Dynamic {
        /// Source identifier.
        source_id: String,
    },
}

/// A registered action group with its actions.
#[derive(Clone, Debug)]
pub struct RegisteredActionGroup {
    /// Group metadata.
    pub metadata: ActionGroupMetadata,

    /// IDs of actions in this group (ordered by action's order field).
    pub action_ids: Vec<ActionId>,

    /// Where the group was registered from.
    pub source: ActionGroupSource,
}

/// A group with its actions for UI rendering.
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct ActionGroupWithActions {
    /// Group metadata.
    pub group: ActionGroupMetadata,

    /// Actions in this group.
    pub actions: Vec<ActionMetadata>,
}

/// Built-in action group definitions.
pub mod builtin {
    use super::*;

    /// Export group - export and copy actions.
    pub fn export() -> ActionGroupMetadata {
        ActionGroupMetadata {
            id: ActionGroupId::builtin("export"),
            name: "Export".into(),
            description: Some("Export and copy actions".into()),
            icon: Some("download".into()),
            order: 100,
            scope: None,
        }
    }

    /// Code generation group.
    pub fn code_generation() -> ActionGroupMetadata {
        ActionGroupMetadata {
            id: ActionGroupId::builtin("code-generation"),
            name: "Code Generation".into(),
            description: Some("Generate code snippets from requests".into()),
            icon: Some("code".into()),
            order: 200,
            scope: Some(ActionScope::HttpRequest),
        }
    }

    /// Send group - request sending actions.
    pub fn send() -> ActionGroupMetadata {
        ActionGroupMetadata {
            id: ActionGroupId::builtin("send"),
            name: "Send".into(),
            description: Some("Actions for sending requests".into()),
            icon: Some("play".into()),
            order: 50,
            scope: Some(ActionScope::HttpRequest),
        }
    }

    /// Import group.
    pub fn import() -> ActionGroupMetadata {
        ActionGroupMetadata {
            id: ActionGroupId::builtin("import"),
            name: "Import".into(),
            description: Some("Import data from files".into()),
            icon: Some("upload".into()),
            order: 150,
            scope: None,
        }
    }

    /// Workspace management group.
    pub fn workspace() -> ActionGroupMetadata {
        ActionGroupMetadata {
            id: ActionGroupId::builtin("workspace"),
            name: "Workspace".into(),
            description: Some("Workspace management actions".into()),
            icon: Some("folder".into()),
            order: 300,
            scope: Some(ActionScope::Workspace),
        }
    }

    /// Get all built-in group definitions.
    pub fn all() -> Vec<ActionGroupMetadata> {
        vec![send(), export(), import(), code_generation(), workspace()]
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_group_id_creation() {
        let id = ActionGroupId::builtin("export");
        assert_eq!(id.as_str(), "yaak:export");

        let plugin_id = ActionGroupId::plugin("my-plugin", "utilities");
        assert_eq!(plugin_id.as_str(), "plugin.my-plugin:utilities");
    }

    #[test]
    fn test_builtin_groups() {
        let groups = builtin::all();
        assert!(!groups.is_empty());
        assert!(groups.iter().any(|g| g.id == ActionGroupId::builtin("export")));
    }
}
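A sketch of how a front end might turn these groups into menu sections, assuming an `ActionExecutor` populated as in the executor module above:

```rust
use yaak_actions::{ActionExecutor, ActionScope};

/// Print a context-menu outline for an HTTP request, one section per group.
async fn print_http_menu(executor: &ActionExecutor) {
    // Groups come back sorted by `order`; actions within each group likewise.
    for section in executor.list_grouped(Some(ActionScope::HttpRequest)).await {
        println!("== {} ==", section.group.name);
        for action in section.actions {
            println!("  {} ({})", action.label, action.id);
        }
    }
}
```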
@@ -1,103 +0,0 @@
//! Action handler types and execution.

use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;

use crate::{ActionError, ActionParams, ActionResult, CurrentContext};

/// A boxed future for async action handlers.
pub type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

/// Function signature for action handlers.
pub type ActionHandlerFn = Arc<
    dyn Fn(CurrentContext, ActionParams) -> BoxFuture<'static, Result<ActionResult, ActionError>>
        + Send
        + Sync,
>;

/// Trait for types that can handle action invocations.
pub trait ActionHandler: Send + Sync {
    /// Execute the action with the given context and parameters.
    fn handle(
        &self,
        context: CurrentContext,
        params: ActionParams,
    ) -> BoxFuture<'static, Result<ActionResult, ActionError>>;
}

/// Wrapper to create an ActionHandler from a function.
pub struct FnHandler<F>(pub F);

impl<F, Fut> ActionHandler for FnHandler<F>
where
    F: Fn(CurrentContext, ActionParams) -> Fut + Send + Sync,
    Fut: Future<Output = Result<ActionResult, ActionError>> + Send + 'static,
{
    fn handle(
        &self,
        context: CurrentContext,
        params: ActionParams,
    ) -> BoxFuture<'static, Result<ActionResult, ActionError>> {
        Box::pin((self.0)(context, params))
    }
}

/// Create an action handler from an async function.
///
/// # Example
/// ```ignore
/// let handler = handler_fn(|ctx, params| async move {
///     Ok(ActionResult::ok())
/// });
/// ```
pub fn handler_fn<F, Fut>(f: F) -> FnHandler<F>
where
    F: Fn(CurrentContext, ActionParams) -> Fut + Send + Sync,
    Fut: Future<Output = Result<ActionResult, ActionError>> + Send + 'static,
{
    FnHandler(f)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_handler_fn() {
        let handler = handler_fn(|_ctx, _params| async move { Ok(ActionResult::ok()) });

        let result = handler
            .handle(CurrentContext::default(), ActionParams::empty())
            .await;

        assert!(result.is_ok());
    }

    #[tokio::test]
    async fn test_handler_with_params() {
        let handler = handler_fn(|_ctx, params| async move {
            let name: Option<String> = params.get("name");
            Ok(ActionResult::with_message(format!(
                "Hello, {}!",
                name.unwrap_or_else(|| "World".to_string())
            )))
        });

        let params = ActionParams::from_json(serde_json::json!({
            "name": "Yaak"
        }));

        let result = handler
            .handle(CurrentContext::default(), params)
            .await
            .unwrap();

        match result {
            ActionResult::Success { message, .. } => {
                assert_eq!(message, Some("Hello, Yaak!".to_string()));
            }
            _ => panic!("Expected Success result"),
        }
    }
}
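`handler_fn` covers stateless closures; handlers that carry state can implement the trait directly. A sketch, with the struct and its field purely illustrative:

```rust
use std::sync::Arc;
use yaak_actions::{
    ActionError, ActionHandler, ActionParams, ActionResult, BoxFuture, CurrentContext,
};

/// A handler that shares state (here just a prefix string) across calls.
struct PrefixedEcho {
    prefix: Arc<String>,
}

impl ActionHandler for PrefixedEcho {
    fn handle(
        &self,
        _context: CurrentContext,
        params: ActionParams,
    ) -> BoxFuture<'static, Result<ActionResult, ActionError>> {
        // Clone the Arc so the future can be 'static, as the trait requires.
        let prefix = Arc::clone(&self.prefix);
        Box::pin(async move {
            let msg: String = params.get("message").unwrap_or_default();
            Ok(ActionResult::with_message(format!("{prefix}{msg}")))
        })
    }
}
```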
@@ -1,18 +0,0 @@
//! Centralized action system for Yaak.
//!
//! This crate provides a unified hub for registering and invoking actions
//! across all entry points: plugins, Tauri desktop app, CLI, deep links, and MCP server.

mod context;
mod error;
mod executor;
mod groups;
mod handler;
mod types;

pub use context::*;
pub use error::*;
pub use executor::*;
pub use groups::*;
pub use handler::*;
pub use types::*;
@@ -1,273 +0,0 @@
//! Core types for the action system.

use serde::{Deserialize, Serialize};
use ts_rs::TS;

use crate::{ActionGroupId, RequiredContext};

/// Unique identifier for an action.
///
/// Format: `namespace:category:name`
/// - Built-in: `yaak:http-request:send`
/// - Plugin: `plugin.copy-curl:http-request:copy`
#[derive(Clone, Debug, Hash, Eq, PartialEq, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct ActionId(pub String);

impl ActionId {
    /// Create a namespaced action ID.
    pub fn new(namespace: &str, category: &str, name: &str) -> Self {
        Self(format!("{}:{}:{}", namespace, category, name))
    }

    /// Create ID for built-in actions.
    pub fn builtin(category: &str, name: &str) -> Self {
        Self::new("yaak", category, name)
    }

    /// Create ID for plugin actions.
    pub fn plugin(plugin_ref_id: &str, category: &str, name: &str) -> Self {
        Self::new(&format!("plugin.{}", plugin_ref_id), category, name)
    }

    /// Get the raw string value.
    pub fn as_str(&self) -> &str {
        &self.0
    }
}

impl std::fmt::Display for ActionId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.0)
    }
}

/// The scope in which an action can be invoked.
#[derive(Clone, Debug, Hash, Eq, PartialEq, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "kebab-case")]
pub enum ActionScope {
    /// Global actions available everywhere.
    Global,
    /// Actions on HTTP requests.
    HttpRequest,
    /// Actions on WebSocket requests.
    WebsocketRequest,
    /// Actions on gRPC requests.
    GrpcRequest,
    /// Actions on workspaces.
    Workspace,
    /// Actions on folders.
    Folder,
    /// Actions on environments.
    Environment,
    /// Actions on cookie jars.
    CookieJar,
}

/// Metadata about an action for discovery.
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct ActionMetadata {
    /// Unique identifier for this action.
    pub id: ActionId,

    /// Display label for the action.
    pub label: String,

    /// Optional description of what the action does.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub description: Option<String>,

    /// Icon name to display.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub icon: Option<String>,

    /// The scope this action applies to.
    pub scope: ActionScope,

    /// Keyboard shortcut (e.g., "Cmd+Enter").
    #[serde(skip_serializing_if = "Option::is_none")]
    pub keyboard_shortcut: Option<String>,

    /// Whether the action requires a selection/target.
    #[serde(default)]
    pub requires_selection: bool,

    /// Optional condition expression for when action is enabled.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub enabled_condition: Option<String>,

    /// Optional group this action belongs to.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub group_id: Option<ActionGroupId>,

    /// Sort order within a group (lower = earlier).
    #[serde(default)]
    pub order: i32,

    /// Context requirements for this action.
    #[serde(default)]
    pub required_context: RequiredContext,
}

/// Where an action was registered from.
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(tag = "type", rename_all = "kebab-case")]
pub enum ActionSource {
    /// Built into Yaak core.
    Builtin,
    /// Registered by a plugin.
    Plugin {
        /// Plugin reference ID.
        ref_id: String,
        /// Plugin name.
        name: String,
    },
    /// Registered at runtime (e.g., by MCP tools).
    Dynamic {
        /// Source identifier.
        source_id: String,
    },
}

/// Parameters passed to action handlers.
#[derive(Clone, Debug, Default, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct ActionParams {
    /// Arbitrary JSON parameters.
    #[serde(default)]
    #[ts(type = "unknown")]
    pub data: serde_json::Value,
}

impl ActionParams {
    /// Create empty params.
    pub fn empty() -> Self {
        Self {
            data: serde_json::Value::Null,
        }
    }

    /// Create params from a JSON value.
    pub fn from_json(data: serde_json::Value) -> Self {
        Self { data }
    }

    /// Get a typed value from the params.
    pub fn get<T: serde::de::DeserializeOwned>(&self, key: &str) -> Option<T> {
        self.data
            .get(key)
            .and_then(|v| serde_json::from_value(v.clone()).ok())
    }
}

/// Result of action execution.
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(tag = "type", rename_all = "kebab-case")]
pub enum ActionResult {
    /// Action completed successfully.
    Success {
        /// Optional data to return.
        #[serde(skip_serializing_if = "Option::is_none")]
        #[ts(type = "unknown")]
        data: Option<serde_json::Value>,
        /// Optional message to display.
        #[serde(skip_serializing_if = "Option::is_none")]
        message: Option<String>,
    },

    /// Action requires user input to continue.
    RequiresInput {
        /// Prompt to show user.
        prompt: InputPrompt,
        /// Continuation token.
        continuation_id: String,
    },

    /// Action was cancelled by the user.
    Cancelled,
}

impl ActionResult {
    /// Create a success result with no data.
    pub fn ok() -> Self {
        Self::Success {
            data: None,
            message: None,
        }
    }

    /// Create a success result with a message.
    pub fn with_message(message: impl Into<String>) -> Self {
        Self::Success {
            data: None,
            message: Some(message.into()),
        }
    }

    /// Create a success result with data.
    pub fn with_data(data: serde_json::Value) -> Self {
        Self::Success {
            data: Some(data),
            message: None,
        }
    }
}

/// A prompt for user input.
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(tag = "type", rename_all = "kebab-case")]
pub enum InputPrompt {
    /// Text input prompt.
    Text {
        label: String,
        placeholder: Option<String>,
        default_value: Option<String>,
    },
    /// Selection prompt.
    Select {
        label: String,
        options: Vec<SelectOption>,
    },
    /// Confirmation prompt.
    Confirm { label: String },
}

/// An option in a select prompt.
#[derive(Clone, Debug, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct SelectOption {
    pub label: String,
    pub value: String,
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_action_id_creation() {
        let id = ActionId::builtin("http-request", "send");
        assert_eq!(id.as_str(), "yaak:http-request:send");

        let plugin_id = ActionId::plugin("copy-curl", "http-request", "copy");
        assert_eq!(plugin_id.as_str(), "plugin.copy-curl:http-request:copy");
    }

    #[test]
    fn test_action_params() {
        let params = ActionParams::from_json(serde_json::json!({
            "name": "test",
            "count": 42
        }));

        assert_eq!(params.get::<String>("name"), Some("test".to_string()));
        assert_eq!(params.get::<i32>("count"), Some(42));
        assert_eq!(params.get::<String>("missing"), None);
    }
}
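One consequence of the `RequiresInput` variant is that every caller needs a three-way dispatch on the result. A sketch of that shape for a CLI front end; the prompt/resume plumbing is assumed, not defined in this crate:

```rust
use yaak_actions::{ActionResult, InputPrompt};

/// Handle one action result in a CLI front end (sketch).
fn handle_result(result: ActionResult) {
    match result {
        ActionResult::Success { message, data } => {
            if let Some(msg) = message {
                println!("{msg}");
            }
            if let Some(data) = data {
                println!("{data}");
            }
        }
        ActionResult::RequiresInput { prompt, continuation_id } => {
            // A real caller would show the prompt, then re-invoke the action
            // with the continuation_id; that plumbing is out of scope here.
            match prompt {
                InputPrompt::Text { label, .. } => println!("text input needed: {label}"),
                InputPrompt::Select { label, options } => {
                    println!("choose ({label}): {} options", options.len())
                }
                InputPrompt::Confirm { label } => println!("confirm: {label}"),
            }
            let _ = continuation_id;
        }
        ActionResult::Cancelled => eprintln!("cancelled"),
    }
}
```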
@@ -1,9 +0,0 @@
[package]
name = "yaak-common"
version = "0.1.0"
edition = "2024"
publish = false

[dependencies]
serde_json = { workspace = true }
tokio = { workspace = true, features = ["process"] }
@@ -1,16 +0,0 @@
use std::ffi::OsStr;

#[cfg(target_os = "windows")]
const CREATE_NO_WINDOW: u32 = 0x0800_0000;

/// Creates a new `tokio::process::Command` that won't spawn a console window on Windows.
pub fn new_xplatform_command<S: AsRef<OsStr>>(program: S) -> tokio::process::Command {
    #[allow(unused_mut)]
    let mut cmd = tokio::process::Command::new(program);
    #[cfg(target_os = "windows")]
    {
        use std::os::windows::process::CommandExt;
        cmd.creation_flags(CREATE_NO_WINDOW);
    }
    cmd
}
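A usage sketch: on Windows the returned command carries `CREATE_NO_WINDOW`, elsewhere it behaves like a plain `tokio::process::Command`:

```rust
use yaak_common::command::new_xplatform_command;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Spawns without flashing a console window on Windows.
    let output = new_xplatform_command("git").arg("--version").output().await?;
    println!("{}", String::from_utf8_lossy(&output.stdout).trim());
    Ok(())
}
```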
@@ -1,3 +0,0 @@
pub mod command;
pub mod platform;
pub mod serde;
@@ -1,23 +0,0 @@
use serde_json::Value;
use std::collections::BTreeMap;

pub fn get_bool(v: &Value, key: &str, fallback: bool) -> bool {
    match v.get(key) {
        None => fallback,
        Some(v) => v.as_bool().unwrap_or(fallback),
    }
}

pub fn get_str<'a>(v: &'a Value, key: &str) -> &'a str {
    match v.get(key) {
        None => "",
        Some(v) => v.as_str().unwrap_or_default(),
    }
}

pub fn get_str_map<'a>(v: &'a BTreeMap<String, Value>, key: &str) -> &'a str {
    match v.get(key) {
        None => "",
        Some(v) => v.as_str().unwrap_or_default(),
    }
}
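These helpers trade strictness for convenience: a missing or mistyped key falls back rather than erroring. A quick sketch:

```rust
use serde_json::json;
use yaak_common::serde::{get_bool, get_str};

fn main() {
    let v = json!({ "enabled": true, "name": "prod" });
    assert!(get_bool(&v, "enabled", false));
    assert!(!get_bool(&v, "missing", false)); // falls back
    assert_eq!(get_str(&v, "name"), "prod");
    assert_eq!(get_str(&v, "missing"), ""); // empty, never panics
}
```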
@@ -1,9 +0,0 @@
[package]
name = "yaak-core"
version = "0.0.0"
edition = "2024"
authors = ["Gregory Schier"]
publish = false

[dependencies]
thiserror = { workspace = true }
@@ -1,56 +0,0 @@
use std::path::PathBuf;

/// Context for a workspace operation.
///
/// In Tauri, this is extracted from the WebviewWindow URL.
/// In CLI, this is constructed from command arguments or config.
#[derive(Debug, Clone, Default)]
pub struct WorkspaceContext {
    pub workspace_id: Option<String>,
    pub environment_id: Option<String>,
    pub cookie_jar_id: Option<String>,
    pub request_id: Option<String>,
}

impl WorkspaceContext {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn with_workspace(mut self, workspace_id: impl Into<String>) -> Self {
        self.workspace_id = Some(workspace_id.into());
        self
    }

    pub fn with_environment(mut self, environment_id: impl Into<String>) -> Self {
        self.environment_id = Some(environment_id.into());
        self
    }

    pub fn with_cookie_jar(mut self, cookie_jar_id: impl Into<String>) -> Self {
        self.cookie_jar_id = Some(cookie_jar_id.into());
        self
    }

    pub fn with_request(mut self, request_id: impl Into<String>) -> Self {
        self.request_id = Some(request_id.into());
        self
    }
}

/// Application context trait for accessing app-level resources.
///
/// This abstracts over Tauri's `AppHandle` for path resolution and app identity.
/// Implemented by Tauri's AppHandle and by CLI's own context struct.
pub trait AppContext: Send + Sync + Clone {
    /// Returns the path to the application data directory.
    /// This is where the database and other persistent data are stored.
    fn app_data_dir(&self) -> PathBuf;

    /// Returns the application identifier (e.g., "app.yaak.desktop").
    /// Used for keyring access and other platform-specific features.
    fn app_identifier(&self) -> &str;

    /// Returns true if running in development mode.
    fn is_dev(&self) -> bool;
}
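The doc comment says the CLI supplies its own implementation; that implementation is not part of this diff, but a minimal sketch could look like this (field and identifier values are illustrative):

```rust
use std::path::PathBuf;
use yaak_core::AppContext;

/// Hypothetical CLI-side context; values are illustrative only.
#[derive(Clone)]
struct CliContext {
    data_dir: PathBuf,
}

impl AppContext for CliContext {
    fn app_data_dir(&self) -> PathBuf {
        self.data_dir.clone()
    }

    fn app_identifier(&self) -> &str {
        // Sharing the desktop identifier would keep keyring entries compatible.
        "app.yaak.desktop"
    }

    fn is_dev(&self) -> bool {
        cfg!(debug_assertions)
    }
}
```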
@@ -1,15 +0,0 @@
use thiserror::Error;

pub type Result<T> = std::result::Result<T, Error>;

#[derive(Error, Debug)]
pub enum Error {
    #[error("Missing required context: {0}")]
    MissingContext(String),

    #[error("Configuration error: {0}")]
    Config(String),

    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),
}
@@ -1,10 +0,0 @@
//! Core abstractions for Yaak that work without Tauri.
//!
//! This crate provides foundational types and traits that allow Yaak's
//! business logic to run in both Tauri (desktop app) and CLI contexts.

mod context;
mod error;

pub use context::{AppContext, WorkspaceContext};
pub use error::{Error, Result};
@@ -1,17 +0,0 @@
import { invoke } from '@tauri-apps/api/core';

export function enableEncryption(workspaceId: string) {
  return invoke<void>('cmd_enable_encryption', { workspaceId });
}

export function revealWorkspaceKey(workspaceId: string) {
  return invoke<string>('cmd_reveal_workspace_key', { workspaceId });
}

export function setWorkspaceKey(args: { workspaceId: string; key: string }) {
  return invoke<void>('cmd_set_workspace_key', args);
}

export function disableEncryption(workspaceId: string) {
  return invoke<void>('cmd_disable_encryption', { workspaceId });
}
@@ -1,7 +0,0 @@
extern crate core;

pub mod encryption;
pub mod error;
pub mod manager;
mod master_key;
mod workspace_key;
@@ -1,30 +0,0 @@
use crate::error::Error::GitNotFound;
use crate::error::Result;
use std::path::Path;
use std::process::Stdio;
use tokio::process::Command;
use yaak_common::command::new_xplatform_command;

/// Create a git command that runs in the specified directory
pub(crate) async fn new_binary_command(dir: &Path) -> Result<Command> {
    let mut cmd = new_binary_command_global().await?;
    cmd.arg("-C").arg(dir);
    Ok(cmd)
}

/// Create a git command without a specific directory (for global operations)
pub(crate) async fn new_binary_command_global() -> Result<Command> {
    // 1. Probe that `git` exists and is runnable
    let mut probe = new_xplatform_command("git");
    probe.arg("--version").stdin(Stdio::null()).stdout(Stdio::null()).stderr(Stdio::null());

    let status = probe.status().await.map_err(|_| GitNotFound)?;

    if !status.success() {
        return Err(GitNotFound);
    }

    // 2. Build the reusable git command
    let cmd = new_xplatform_command("git");
    Ok(cmd)
}

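A hedged sketch of how these crate-internal helpers are consumed. The `git_current_branch` function is hypothetical; the `GenericError` import and output handling mirror the sibling modules in this diff.

```rust
use crate::error::Error::GenericError;
use std::path::Path;

// Hypothetical helper: resolve the checked-out branch name via the git binary
pub(crate) async fn git_current_branch(dir: &Path) -> Result<String> {
    let out = new_binary_command(dir)
        .await?
        .args(["rev-parse", "--abbrev-ref", "HEAD"])
        .output()
        .await
        .map_err(|e| GenericError(format!("failed to run git rev-parse: {e}")))?;

    if !out.status.success() {
        return Err(GenericError("Failed to resolve current branch".to_string()));
    }
    Ok(String::from_utf8_lossy(&out.stdout).trim().to_string())
}
```
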
@@ -1,153 +0,0 @@
use serde::{Deserialize, Serialize};
use ts_rs::TS;

use crate::binary::new_binary_command;
use crate::error::Error::GenericError;
use crate::error::Result;
use std::path::Path;

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, TS)]
#[serde(rename_all = "snake_case", tag = "type")]
#[ts(export, export_to = "gen_git.ts")]
pub enum BranchDeleteResult {
    Success { message: String },
    NotFullyMerged,
}

pub async fn git_checkout_branch(dir: &Path, branch_name: &str, force: bool) -> Result<String> {
    let branch_name = branch_name.trim_start_matches("origin/");

    let mut args = vec!["checkout"];
    if force {
        args.push("--force");
    }
    args.push(branch_name);

    let out = new_binary_command(dir)
        .await?
        .args(&args)
        .output()
        .await
        .map_err(|e| GenericError(format!("failed to run git checkout: {e}")))?;

    let stdout = String::from_utf8_lossy(&out.stdout);
    let stderr = String::from_utf8_lossy(&out.stderr);
    let combined = format!("{}{}", stdout, stderr);

    if !out.status.success() {
        return Err(GenericError(format!("Failed to checkout: {}", combined.trim())));
    }

    Ok(branch_name.to_string())
}

pub async fn git_create_branch(dir: &Path, name: &str, base: Option<&str>) -> Result<()> {
    let mut cmd = new_binary_command(dir).await?;
    cmd.arg("branch").arg(name);
    if let Some(base_branch) = base {
        cmd.arg(base_branch);
    }

    let out =
        cmd.output().await.map_err(|e| GenericError(format!("failed to run git branch: {e}")))?;

    let stdout = String::from_utf8_lossy(&out.stdout);
    let stderr = String::from_utf8_lossy(&out.stderr);
    let combined = format!("{}{}", stdout, stderr);

    if !out.status.success() {
        return Err(GenericError(format!("Failed to create branch: {}", combined.trim())));
    }

    Ok(())
}

pub async fn git_delete_branch(dir: &Path, name: &str, force: bool) -> Result<BranchDeleteResult> {
    let mut cmd = new_binary_command(dir).await?;

    let out =
        if force { cmd.args(["branch", "-D", name]) } else { cmd.args(["branch", "-d", name]) }
            .output()
            .await
            .map_err(|e| GenericError(format!("failed to run git branch -d: {e}")))?;

    let stdout = String::from_utf8_lossy(&out.stdout);
    let stderr = String::from_utf8_lossy(&out.stderr);
    let combined = format!("{}{}", stdout, stderr);

    if !out.status.success() && stderr.to_lowercase().contains("not fully merged") {
        return Ok(BranchDeleteResult::NotFullyMerged);
    }

    if !out.status.success() {
        return Err(GenericError(format!("Failed to delete branch: {}", combined.trim())));
    }

    Ok(BranchDeleteResult::Success { message: combined })
}

pub async fn git_merge_branch(dir: &Path, name: &str) -> Result<()> {
    let out = new_binary_command(dir)
        .await?
        .args(["merge", name])
        .output()
        .await
        .map_err(|e| GenericError(format!("failed to run git merge: {e}")))?;

    let stdout = String::from_utf8_lossy(&out.stdout);
    let stderr = String::from_utf8_lossy(&out.stderr);
    let combined = format!("{}{}", stdout, stderr);

    if !out.status.success() {
        // Check for merge conflicts
        if combined.to_lowercase().contains("conflict") {
            return Err(GenericError(
                "Merge conflicts detected. Please resolve them manually.".to_string(),
            ));
        }
        return Err(GenericError(format!("Failed to merge: {}", combined.trim())));
    }

    Ok(())
}

pub async fn git_delete_remote_branch(dir: &Path, name: &str) -> Result<()> {
    // Remote branch names come in as "origin/branch-name", so extract the branch name
    let branch_name = name.trim_start_matches("origin/");

    let out = new_binary_command(dir)
        .await?
        .args(["push", "origin", "--delete", branch_name])
        .output()
        .await
        .map_err(|e| GenericError(format!("failed to run git push --delete: {e}")))?;

    let stdout = String::from_utf8_lossy(&out.stdout);
    let stderr = String::from_utf8_lossy(&out.stderr);
    let combined = format!("{}{}", stdout, stderr);

    if !out.status.success() {
        return Err(GenericError(format!("Failed to delete remote branch: {}", combined.trim())));
    }

    Ok(())
}

pub async fn git_rename_branch(dir: &Path, old_name: &str, new_name: &str) -> Result<()> {
    let out = new_binary_command(dir)
        .await?
        .args(["branch", "-m", old_name, new_name])
        .output()
        .await
        .map_err(|e| GenericError(format!("failed to run git branch -m: {e}")))?;

    let stdout = String::from_utf8_lossy(&out.stdout);
    let stderr = String::from_utf8_lossy(&out.stderr);
    let combined = format!("{}{}", stdout, stderr);

    if !out.status.success() {
        return Err(GenericError(format!("Failed to rename branch: {}", combined.trim())));
    }

    Ok(())
}

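A usage sketch for the two-step delete contract above: attempt a safe delete first, and only force-delete after the caller confirms. `confirm_force_delete` is a hypothetical caller-side prompt, not part of this diff.

```rust
use std::path::Path;

async fn delete_with_confirm(dir: &Path, name: &str) -> Result<()> {
    match git_delete_branch(dir, name, false).await? {
        BranchDeleteResult::Success { message } => println!("{}", message.trim()),
        BranchDeleteResult::NotFullyMerged => {
            // Hypothetical confirmation hook (e.g., a dialog in the app)
            if confirm_force_delete(name) {
                git_delete_branch(dir, name, true).await?;
            }
        }
    }
    Ok(())
}
```
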
@@ -1,53 +0,0 @@
use crate::binary::new_binary_command;
use crate::error::Error::GenericError;
use crate::error::Result;
use log::info;
use serde::{Deserialize, Serialize};
use std::fs;
use std::path::Path;
use ts_rs::TS;

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, TS)]
#[serde(rename_all = "snake_case", tag = "type")]
#[ts(export, export_to = "gen_git.ts")]
pub enum CloneResult {
    Success,
    Cancelled,
    NeedsCredentials { url: String, error: Option<String> },
}

pub async fn git_clone(url: &str, dir: &Path) -> Result<CloneResult> {
    let parent = dir.parent().ok_or_else(|| GenericError("Invalid clone directory".to_string()))?;
    fs::create_dir_all(parent)
        .map_err(|e| GenericError(format!("Failed to create directory: {e}")))?;
    let mut cmd = new_binary_command(parent).await?;
    cmd.args(["clone", url]).arg(dir).env("GIT_TERMINAL_PROMPT", "0");

    let out =
        cmd.output().await.map_err(|e| GenericError(format!("failed to run git clone: {e}")))?;

    let stdout = String::from_utf8_lossy(&out.stdout);
    let stderr = String::from_utf8_lossy(&out.stderr);
    let combined = format!("{}{}", stdout, stderr);
    let combined_lower = combined.to_lowercase();

    info!("Cloned status={}: {combined}", out.status);

    if !out.status.success() {
        // Check for credentials errors
        if combined_lower.contains("could not read") {
            return Ok(CloneResult::NeedsCredentials { url: url.to_string(), error: None });
        }
        if combined_lower.contains("unable to access")
            || combined_lower.contains("authentication failed")
        {
            return Ok(CloneResult::NeedsCredentials {
                url: url.to_string(),
                error: Some(combined.to_string()),
            });
        }
        return Err(GenericError(format!("Failed to clone: {}", combined.trim())));
    }

    Ok(CloneResult::Success)
}

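A sketch of consuming the tri-state result, inside some async caller. `prompt_for_credentials` is a hypothetical UI hook; the variants matched are exactly the ones defined above.

```rust
use std::path::Path;

async fn clone_example() -> Result<()> {
    match git_clone("https://example.com/repo.git", Path::new("/tmp/repo")).await? {
        CloneResult::Success => println!("cloned"),
        CloneResult::Cancelled => println!("cancelled"),
        CloneResult::NeedsCredentials { url, error } => {
            eprintln!("credentials required for {url}: {error:?}");
            prompt_for_credentials(&url); // hypothetical caller-side hook
        }
    }
    Ok(())
}
```
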
@@ -1,44 +0,0 @@
use crate::binary::new_binary_command_global;
use crate::error::Error::GenericError;
use crate::error::Result;
use std::process::Stdio;
use tokio::io::AsyncWriteExt;
use url::Url;

pub async fn git_add_credential(remote_url: &str, username: &str, password: &str) -> Result<()> {
    let url = Url::parse(remote_url)
        .map_err(|e| GenericError(format!("Failed to parse remote url {remote_url}: {e:?}")))?;
    let protocol = url.scheme();
    let host = url
        .host_str()
        .ok_or_else(|| GenericError(format!("Remote url {remote_url} has no host")))?;
    let path = Some(url.path());

    let mut child = new_binary_command_global()
        .await?
        .args(["credential", "approve"])
        .stdin(Stdio::piped())
        .stdout(Stdio::null())
        .spawn()?;

    {
        let stdin = child.stdin.as_mut().unwrap();
        stdin.write_all(format!("protocol={}\n", protocol).as_bytes()).await?;
        stdin.write_all(format!("host={}\n", host).as_bytes()).await?;
        if let Some(path) = path {
            if !path.is_empty() {
                stdin
                    .write_all(format!("path={}\n", path.trim_start_matches('/')).as_bytes())
                    .await?;
            }
        }
        stdin.write_all(format!("username={}\n", username).as_bytes()).await?;
        stdin.write_all(format!("password={}\n", password).as_bytes()).await?;
        stdin.write_all(b"\n").await?; // blank line terminator
    }

    let status = child.wait().await?;
    if !status.success() {
        return Err(GenericError("Failed to approve git credential".to_string()));
    }

    Ok(())
}

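A sketch tying the two flows together: when a clone reports `NeedsCredentials`, approve them with git's credential helper and retry once. The `username`/`password` values would come from the UI and are placeholders here.

```rust
use std::path::Path;

async fn clone_with_credentials(url: &str, dir: &Path) -> Result<CloneResult> {
    match git_clone(url, dir).await? {
        CloneResult::NeedsCredentials { url, .. } => {
            // Placeholder values; in practice these come from a credentials prompt
            let (username, password) = ("user".to_string(), "token".to_string());
            git_add_credential(&url, &username, &password).await?;
            git_clone(&url, dir).await // retry with approved credentials
        }
        other => Ok(other),
    }
}
```
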
@@ -1,36 +0,0 @@
mod add;
mod binary;
mod branch;
mod clone;
mod commit;
mod credential;
pub mod error;
mod fetch;
mod init;
mod log;
mod pull;
mod push;
mod remotes;
mod repository;
mod status;
mod unstage;
mod util;

// Re-export all git functions for external use
pub use add::git_add;
pub use branch::{
    BranchDeleteResult, git_checkout_branch, git_create_branch, git_delete_branch,
    git_delete_remote_branch, git_merge_branch, git_rename_branch,
};
pub use clone::{CloneResult, git_clone};
pub use commit::git_commit;
pub use credential::git_add_credential;
pub use fetch::git_fetch_all;
pub use init::git_init;
pub use log::{GitCommit, git_log};
pub use pull::{PullResult, git_pull};
pub use push::{PushResult, git_push};
pub use remotes::{GitRemote, git_add_remote, git_remotes, git_rm_remote};
pub use status::{GitStatusSummary, git_status};
pub use unstage::git_unstage;

@@ -1,89 +0,0 @@
use crate::binary::new_binary_command;
use crate::error::Error::GenericError;
use crate::error::Result;
use crate::repository::open_repo;
use crate::util::{get_current_branch_name, get_default_remote_for_push_in_repo};
use log::info;
use serde::{Deserialize, Serialize};
use std::path::Path;
use ts_rs::TS;

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, TS)]
#[serde(rename_all = "snake_case", tag = "type")]
#[ts(export, export_to = "gen_git.ts")]
pub enum PushResult {
    Success { message: String },
    UpToDate,
    NeedsCredentials { url: String, error: Option<String> },
}

pub async fn git_push(dir: &Path) -> Result<PushResult> {
    // Extract all git2 data before any await points (git2 types are not Send)
    let (branch_name, remote_name, remote_url) = {
        let repo = open_repo(dir)?;
        let branch_name = get_current_branch_name(&repo)?;
        let remote = get_default_remote_for_push_in_repo(&repo)?;
        let remote_name =
            remote.name().ok_or(GenericError("Failed to get remote name".to_string()))?.to_string();
        let remote_url =
            remote.url().ok_or(GenericError("Failed to get remote url".to_string()))?.to_string();
        (branch_name, remote_name, remote_url)
    };

    let out = new_binary_command(dir)
        .await?
        .args(["push", &remote_name, &branch_name])
        .env("GIT_TERMINAL_PROMPT", "0")
        .output()
        .await
        .map_err(|e| GenericError(format!("failed to run git push: {e}")))?;

    let stdout = String::from_utf8_lossy(&out.stdout);
    let stderr = String::from_utf8_lossy(&out.stderr);
    let combined = stdout + stderr;
    let combined_lower = combined.to_lowercase();

    info!("Pushed to repo status={} {combined}", out.status);

    // Helper to check if this is a credentials error
    let is_credentials_error = || {
        combined_lower.contains("could not read")
            || combined_lower.contains("unable to access")
            || combined_lower.contains("authentication failed")
    };

    // Check for explicit rejection indicators first (e.g., protected branch rejections).
    // These can occur even if some git servers don't properly set exit codes.
    if combined_lower.contains("rejected") || combined_lower.contains("failed to push") {
        if is_credentials_error() {
            return Ok(PushResult::NeedsCredentials {
                url: remote_url.to_string(),
                error: Some(combined.to_string()),
            });
        }
        return Err(GenericError(format!("Failed to push: {combined}")));
    }

    // Check the exit status for any other failures
    if !out.status.success() {
        if combined_lower.contains("could not read") {
            return Ok(PushResult::NeedsCredentials { url: remote_url.to_string(), error: None });
        }
        if combined_lower.contains("unable to access")
            || combined_lower.contains("authentication failed")
        {
            return Ok(PushResult::NeedsCredentials {
                url: remote_url.to_string(),
                error: Some(combined.to_string()),
            });
        }
        return Err(GenericError(format!("Failed to push: {combined}")));
    }

    // Success cases (exit code 0 and no rejection indicators)
    if combined_lower.contains("up-to-date") {
        return Ok(PushResult::UpToDate);
    }

    Ok(PushResult::Success { message: format!("Pushed to {}/{}", remote_name, branch_name) })
}

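A sketch of a push flow that surfaces credential failures to the caller, mirroring the `NeedsCredentials` contract above.

```rust
use std::path::Path;

async fn push_and_report(dir: &Path) -> Result<()> {
    match git_push(dir).await? {
        PushResult::Success { message } => println!("{message}"),
        PushResult::UpToDate => println!("Everything up to date"),
        PushResult::NeedsCredentials { url, error } => {
            // The caller would prompt for credentials here, then retry
            eprintln!("push to {url} needs credentials: {error:?}");
        }
    }
    Ok(())
}
```
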
@@ -1,51 +0,0 @@
use crate::manager::GrpcStreamError;
use prost::DecodeError;
use serde::{Serialize, Serializer};
use serde_json::Error as SerdeJsonError;
use std::io;
use thiserror::Error;
use tonic::Status;

#[derive(Error, Debug)]
pub enum Error {
    #[error(transparent)]
    TlsError(#[from] yaak_tls::error::Error),

    #[error(transparent)]
    TonicError(#[from] Status),

    #[error("Prost reflect error: {0:?}")]
    ProstReflectError(#[from] prost_reflect::DescriptorError),

    #[error(transparent)]
    DeserializerError(#[from] SerdeJsonError),

    #[error(transparent)]
    GrpcStreamError(#[from] GrpcStreamError),

    #[error(transparent)]
    GrpcDecodeError(#[from] DecodeError),

    #[error(transparent)]
    GrpcInvalidMetadataKeyError(#[from] tonic::metadata::errors::InvalidMetadataKey),

    #[error(transparent)]
    GrpcInvalidMetadataValueError(#[from] tonic::metadata::errors::InvalidMetadataValue),

    #[error(transparent)]
    IOError(#[from] io::Error),

    #[error("GRPC error: {0}")]
    GenericError(String),
}

impl Serialize for Error {
    fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        serializer.serialize_str(self.to_string().as_ref())
    }
}

pub type Result<T> = std::result::Result<T, Error>;

@@ -1,40 +0,0 @@
use crate::error::Result;
use hyper_rustls::{HttpsConnector, HttpsConnectorBuilder};
use hyper_util::client::legacy::Client;
use hyper_util::client::legacy::connect::HttpConnector;
use hyper_util::rt::TokioExecutor;
use log::info;
use tonic::body::BoxBody;
use yaak_tls::{ClientCertificateConfig, get_tls_config};

// I think ALPN breaks this because we're specifying http2_only
const WITH_ALPN: bool = false;

pub(crate) fn get_transport(
    validate_certificates: bool,
    client_cert: Option<ClientCertificateConfig>,
) -> Result<Client<HttpsConnector<HttpConnector>, BoxBody>> {
    let tls_config = get_tls_config(validate_certificates, WITH_ALPN, client_cert.clone())?;

    let mut http = HttpConnector::new();
    http.enforce_http(false); // allow https:// URIs through the inner connector

    let connector = HttpsConnectorBuilder::new()
        .with_tls_config(tls_config)
        .https_or_http()
        .enable_http2()
        .wrap_connector(http);

    let client = Client::builder(TokioExecutor::new())
        .pool_max_idle_per_host(0)
        .http2_only(true)
        .build(connector);

    info!(
        "Created gRPC client validate_certs={} client_cert={}",
        validate_certificates,
        client_cert.is_some()
    );

    Ok(client)
}

@@ -1,31 +0,0 @@
[package]
name = "yaak-http"
version = "0.1.0"
edition = "2024"
publish = false

[dependencies]
async-compression = { version = "0.4", features = ["tokio", "gzip", "deflate", "brotli", "zstd"] }
async-trait = "0.1"
brotli = "7"
bytes = "1.5.0"
cookie = "0.18.1"
flate2 = "1"
futures-util = "0.3"
url = "2"
zstd = "0.13"
hyper-util = { version = "0.1.17", default-features = false, features = ["client-legacy"] }
log = { workspace = true }
mime_guess = "2.0.5"
regex = "1.11.1"
reqwest = { workspace = true, features = ["rustls-tls-manual-roots-no-provider", "socks", "http2", "stream"] }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true, features = ["macros", "rt", "fs", "io-util"] }
tokio-util = { version = "0.7", features = ["codec", "io", "io-util"] }
tower-service = "0.3.3"
urlencoding = "2.1.3"
yaak-common = { workspace = true }
yaak-models = { workspace = true }
yaak-tls = { workspace = true }

@@ -1,78 +0,0 @@
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, ReadBuf};

/// A stream that chains multiple AsyncRead sources together
pub(crate) struct ChainedReader {
    readers: Vec<ReaderType>,
    current_index: usize,
    current_reader: Option<Box<dyn AsyncRead + Send + Unpin + 'static>>,
}

#[derive(Clone)]
pub(crate) enum ReaderType {
    Bytes(Vec<u8>),
    FilePath(String),
}

impl ChainedReader {
    pub(crate) fn new(readers: Vec<ReaderType>) -> Self {
        Self { readers, current_index: 0, current_reader: None }
    }
}

impl AsyncRead for ChainedReader {
    fn poll_read(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<io::Result<()>> {
        loop {
            // Try to read from the current reader if we have one
            if let Some(ref mut reader) = self.current_reader {
                let before_len = buf.filled().len();
                return match Pin::new(reader).poll_read(cx, buf) {
                    Poll::Ready(Ok(())) => {
                        if buf.filled().len() == before_len && buf.remaining() > 0 {
                            // The current reader is exhausted; move to the next one
                            self.current_reader = None;
                            continue;
                        }
                        Poll::Ready(Ok(()))
                    }
                    Poll::Ready(Err(e)) => Poll::Ready(Err(e)),
                    Poll::Pending => Poll::Pending,
                };
            }

            // We need to get the next reader
            if self.current_index >= self.readers.len() {
                // No more readers
                return Poll::Ready(Ok(()));
            }

            // Get the next reader
            let reader_type = self.readers[self.current_index].clone();
            self.current_index += 1;

            match reader_type {
                ReaderType::Bytes(bytes) => {
                    self.current_reader = Some(Box::new(io::Cursor::new(bytes)));
                }
                ReaderType::FilePath(path) => {
                    // We need to handle file opening synchronously in poll_read.
                    // This is a limitation - we use a blocking file open.
                    match std::fs::File::open(&path) {
                        Ok(file) => {
                            // Convert the std File to a tokio File
                            let tokio_file = tokio::fs::File::from_std(file);
                            self.current_reader = Some(Box::new(tokio_file));
                        }
                        Err(e) => return Poll::Ready(Err(e)),
                    }
                }
            }
        }
    }
}

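A crate-internal usage sketch: a multipart-style body assembled from an in-memory prefix, a file on disk, and a suffix, read as one stream. The boundary string and file path are illustrative.

```rust
use tokio::io::AsyncReadExt;

async fn read_all_example() -> std::io::Result<Vec<u8>> {
    let mut reader = ChainedReader::new(vec![
        ReaderType::Bytes(b"--boundary\r\n".to_vec()),
        ReaderType::FilePath("/tmp/upload.bin".to_string()), // illustrative path
        ReaderType::Bytes(b"\r\n--boundary--\r\n".to_vec()),
    ]);
    let mut body = Vec::new();
    // The three sources are drained in order, as if they were one reader
    reader.read_to_end(&mut body).await?;
    Ok(body)
}
```
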
@@ -1,484 +0,0 @@
//! Custom cookie handling for HTTP requests
//!
//! This module provides cookie storage and matching functionality that was previously
//! delegated to reqwest. It implements RFC 6265 cookie domain and path matching.

use log::debug;
use std::sync::{Arc, Mutex};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use url::Url;
use yaak_models::models::{Cookie, CookieDomain, CookieExpires};

/// A thread-safe cookie store that can be shared across requests
#[derive(Debug, Clone)]
pub struct CookieStore {
    cookies: Arc<Mutex<Vec<Cookie>>>,
}

impl Default for CookieStore {
    fn default() -> Self {
        Self::new()
    }
}

impl CookieStore {
    /// Create a new empty cookie store
    pub fn new() -> Self {
        Self { cookies: Arc::new(Mutex::new(Vec::new())) }
    }

    /// Create a cookie store from existing cookies
    pub fn from_cookies(cookies: Vec<Cookie>) -> Self {
        Self { cookies: Arc::new(Mutex::new(cookies)) }
    }

    /// Get all cookies (for persistence)
    pub fn get_all_cookies(&self) -> Vec<Cookie> {
        self.cookies.lock().unwrap().clone()
    }

    /// Get the Cookie header value for the given URL
    pub fn get_cookie_header(&self, url: &Url) -> Option<String> {
        let cookies = self.cookies.lock().unwrap();
        let now = SystemTime::now();

        let matching_cookies: Vec<_> = cookies
            .iter()
            .filter(|cookie| self.cookie_matches(cookie, url, &now))
            .filter_map(|cookie| {
                // Parse the raw cookie to get name=value
                parse_cookie_name_value(&cookie.raw_cookie)
            })
            .collect();

        if matching_cookies.is_empty() {
            None
        } else {
            Some(
                matching_cookies
                    .into_iter()
                    .map(|(name, value)| format!("{}={}", name, value))
                    .collect::<Vec<_>>()
                    .join("; "),
            )
        }
    }

    /// Parse Set-Cookie headers and add cookies to the store
    pub fn store_cookies_from_response(&self, url: &Url, set_cookie_headers: &[String]) {
        let mut cookies = self.cookies.lock().unwrap();

        for header_value in set_cookie_headers {
            if let Some(cookie) = parse_set_cookie(header_value, url) {
                // Remove any existing cookie with the same name and domain
                cookies.retain(|existing| !cookies_match(existing, &cookie));
                debug!(
                    "Storing cookie: {} for domain {:?}",
                    parse_cookie_name_value(&cookie.raw_cookie)
                        .map(|(n, _)| n)
                        .unwrap_or_else(|| "unknown".to_string()),
                    cookie.domain
                );
                cookies.push(cookie);
            }
        }
    }

    /// Check if a cookie matches the given URL
    fn cookie_matches(&self, cookie: &Cookie, url: &Url, now: &SystemTime) -> bool {
        // Check expiration
        if let CookieExpires::AtUtc(expiry_str) = &cookie.expires {
            if let Ok(expiry) = parse_cookie_date(expiry_str) {
                if expiry < *now {
                    return false;
                }
            }
        }

        // Check domain
        let url_host = match url.host_str() {
            Some(h) => h.to_lowercase(),
            None => return false,
        };

        let domain_matches = match &cookie.domain {
            CookieDomain::HostOnly(domain) => url_host == domain.to_lowercase(),
            CookieDomain::Suffix(domain) => {
                let domain_lower = domain.to_lowercase();
                url_host == domain_lower || url_host.ends_with(&format!(".{}", domain_lower))
            }
            // NotPresent and Empty should never occur in practice since we always set domain
            // when parsing Set-Cookie headers. Treat as non-matching to be safe.
            CookieDomain::NotPresent | CookieDomain::Empty => false,
        };

        if !domain_matches {
            return false;
        }

        // Check path
        let (cookie_path, _) = &cookie.path;
        let url_path = url.path();

        path_matches(url_path, cookie_path)
    }
}

/// Parse name=value from a cookie string (raw_cookie format)
fn parse_cookie_name_value(raw_cookie: &str) -> Option<(String, String)> {
    // The raw_cookie typically looks like "name=value" or "name=value; attr1; attr2=..."
    let first_part = raw_cookie.split(';').next()?;
    let mut parts = first_part.splitn(2, '=');
    let name = parts.next()?.trim().to_string();
    let value = parts.next().unwrap_or("").trim().to_string();

    if name.is_empty() { None } else { Some((name, value)) }
}

/// Parse a Set-Cookie header into a Cookie
fn parse_set_cookie(header_value: &str, request_url: &Url) -> Option<Cookie> {
    let parsed = cookie::Cookie::parse(header_value).ok()?;

    let raw_cookie = format!("{}={}", parsed.name(), parsed.value());

    // Determine domain
    let domain = if let Some(domain_attr) = parsed.domain() {
        // Domain attribute present - this is a suffix match
        let domain = domain_attr.trim_start_matches('.').to_lowercase();

        // Reject single-component domains (TLDs) except localhost
        if is_single_component_domain(&domain) && !is_localhost(&domain) {
            debug!("Rejecting cookie with single-component domain: {}", domain);
            return None;
        }

        CookieDomain::Suffix(domain)
    } else {
        // No domain attribute - host-only cookie
        CookieDomain::HostOnly(request_url.host_str().unwrap_or("").to_lowercase())
    };

    // Determine expiration
    let expires = if let Some(max_age) = parsed.max_age() {
        let duration = Duration::from_secs(max_age.whole_seconds().max(0) as u64);
        let expiry = SystemTime::now() + duration;
        let expiry_secs = expiry.duration_since(UNIX_EPOCH).unwrap_or_default().as_secs();
        CookieExpires::AtUtc(format!("{}", expiry_secs))
    } else if let Some(expires_time) = parsed.expires() {
        match expires_time {
            cookie::Expiration::DateTime(dt) => {
                let timestamp = dt.unix_timestamp();
                CookieExpires::AtUtc(format!("{}", timestamp))
            }
            cookie::Expiration::Session => CookieExpires::SessionEnd,
        }
    } else {
        CookieExpires::SessionEnd
    };

    // Determine path
    let path = if let Some(path_attr) = parsed.path() {
        (path_attr.to_string(), true)
    } else {
        // Default path is the directory of the request URI
        let default_path = default_cookie_path(request_url.path());
        (default_path, false)
    };

    Some(Cookie { raw_cookie, domain, expires, path })
}

/// Get the default cookie path from a request path (RFC 6265 Section 5.1.4)
fn default_cookie_path(request_path: &str) -> String {
    if request_path.is_empty() || !request_path.starts_with('/') {
        return "/".to_string();
    }

    // Find the last slash
    if let Some(last_slash) = request_path.rfind('/') {
        if last_slash == 0 { "/".to_string() } else { request_path[..last_slash].to_string() }
    } else {
        "/".to_string()
    }
}

/// Check if a request path matches a cookie path (RFC 6265 Section 5.1.4)
fn path_matches(request_path: &str, cookie_path: &str) -> bool {
    if request_path == cookie_path {
        return true;
    }

    if request_path.starts_with(cookie_path) {
        // The cookie path must end with /, or the char after cookie_path in request_path must be /
        if cookie_path.ends_with('/') {
            return true;
        }
        if request_path.chars().nth(cookie_path.len()) == Some('/') {
            return true;
        }
    }

    false
}

/// Check if two cookies match (same name and domain)
fn cookies_match(a: &Cookie, b: &Cookie) -> bool {
    let name_a = parse_cookie_name_value(&a.raw_cookie).map(|(n, _)| n);
    let name_b = parse_cookie_name_value(&b.raw_cookie).map(|(n, _)| n);

    if name_a != name_b {
        return false;
    }

    // Check domain match
    match (&a.domain, &b.domain) {
        (CookieDomain::HostOnly(d1), CookieDomain::HostOnly(d2)) => {
            d1.to_lowercase() == d2.to_lowercase()
        }
        (CookieDomain::Suffix(d1), CookieDomain::Suffix(d2)) => {
            d1.to_lowercase() == d2.to_lowercase()
        }
        _ => false,
    }
}

/// Parse a cookie date string (Unix timestamp in our format)
fn parse_cookie_date(date_str: &str) -> Result<SystemTime, ()> {
    let timestamp: i64 = date_str.parse().map_err(|_| ())?;
    let duration = Duration::from_secs(timestamp.max(0) as u64);
    Ok(UNIX_EPOCH + duration)
}

/// Check if a domain is a single-component domain (TLD),
/// e.g., "com", "org", "net" - domains without any dots
fn is_single_component_domain(domain: &str) -> bool {
    // Empty or only dots
    let trimmed = domain.trim_matches('.');
    if trimmed.is_empty() {
        return true;
    }
    // IPv6 addresses use colons, not dots - don't consider them single-component
    if domain.contains(':') {
        return false;
    }
    !trimmed.contains('.')
}

/// Check if a domain is localhost or a localhost variant
fn is_localhost(domain: &str) -> bool {
    let lower = domain.to_lowercase();
    lower == "localhost"
        || lower.ends_with(".localhost")
        || lower == "127.0.0.1"
        || lower == "::1"
        || lower == "[::1]"
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_cookie_name_value() {
        assert_eq!(
            parse_cookie_name_value("session=abc123"),
            Some(("session".to_string(), "abc123".to_string()))
        );
        assert_eq!(
            parse_cookie_name_value("name=value; Path=/; HttpOnly"),
            Some(("name".to_string(), "value".to_string()))
        );
        assert_eq!(parse_cookie_name_value("empty="), Some(("empty".to_string(), "".to_string())));
        assert_eq!(parse_cookie_name_value(""), None);
    }

    #[test]
    fn test_path_matches() {
        assert!(path_matches("/", "/"));
        assert!(path_matches("/foo", "/"));
        assert!(path_matches("/foo/bar", "/foo"));
        assert!(path_matches("/foo/bar", "/foo/"));
        assert!(!path_matches("/foobar", "/foo"));
        assert!(!path_matches("/foo", "/foo/bar"));
    }

    #[test]
    fn test_default_cookie_path() {
        assert_eq!(default_cookie_path("/"), "/");
        assert_eq!(default_cookie_path("/foo"), "/");
        assert_eq!(default_cookie_path("/foo/bar"), "/foo");
        assert_eq!(default_cookie_path("/foo/bar/baz"), "/foo/bar");
        assert_eq!(default_cookie_path(""), "/");
    }

    #[test]
    fn test_cookie_store_basic() {
        let store = CookieStore::new();
        let url = Url::parse("https://example.com/path").unwrap();

        // Initially empty
        assert!(store.get_cookie_header(&url).is_none());

        // Add a cookie
        store.store_cookies_from_response(&url, &["session=abc123".to_string()]);

        // Should now have the cookie
        let header = store.get_cookie_header(&url);
        assert_eq!(header, Some("session=abc123".to_string()));
    }

    #[test]
    fn test_cookie_domain_matching() {
        let store = CookieStore::new();
        let url = Url::parse("https://example.com/").unwrap();

        // Cookie with domain attribute (suffix match)
        store.store_cookies_from_response(
            &url,
            &["domain_cookie=value; Domain=example.com".to_string()],
        );

        // Should match example.com
        assert!(store.get_cookie_header(&url).is_some());

        // Should match a subdomain
        let subdomain_url = Url::parse("https://sub.example.com/").unwrap();
        assert!(store.get_cookie_header(&subdomain_url).is_some());

        // Should not match a different domain
        let other_url = Url::parse("https://other.com/").unwrap();
        assert!(store.get_cookie_header(&other_url).is_none());
    }

    #[test]
    fn test_cookie_path_matching() {
        let store = CookieStore::new();
        let url = Url::parse("https://example.com/api/v1").unwrap();

        // Cookie with a path
        store.store_cookies_from_response(&url, &["api_cookie=value; Path=/api".to_string()]);

        // Should match /api/v1
        assert!(store.get_cookie_header(&url).is_some());

        // Should match /api
        let api_url = Url::parse("https://example.com/api").unwrap();
        assert!(store.get_cookie_header(&api_url).is_some());

        // Should not match /other
        let other_url = Url::parse("https://example.com/other").unwrap();
        assert!(store.get_cookie_header(&other_url).is_none());
    }

    #[test]
    fn test_cookie_replacement() {
        let store = CookieStore::new();
        let url = Url::parse("https://example.com/").unwrap();

        // Add a cookie
        store.store_cookies_from_response(&url, &["session=old".to_string()]);
        assert_eq!(store.get_cookie_header(&url), Some("session=old".to_string()));

        // Replace it with a new value
        store.store_cookies_from_response(&url, &["session=new".to_string()]);
        assert_eq!(store.get_cookie_header(&url), Some("session=new".to_string()));

        // Should only have one cookie
        assert_eq!(store.get_all_cookies().len(), 1);
    }

    #[test]
    fn test_is_single_component_domain() {
        // Single-component domains (TLDs)
        assert!(is_single_component_domain("com"));
        assert!(is_single_component_domain("org"));
        assert!(is_single_component_domain("net"));
        assert!(is_single_component_domain("localhost")); // Still single-component, but allowed separately

        // Multi-component domains
        assert!(!is_single_component_domain("example.com"));
        assert!(!is_single_component_domain("sub.example.com"));
        assert!(!is_single_component_domain("co.uk"));

        // Edge cases
        assert!(is_single_component_domain("")); // Empty is treated as single-component
        assert!(is_single_component_domain(".")); // Only dots
        assert!(is_single_component_domain("..")); // Only dots

        // IPv6 addresses (have colons, not dots)
        assert!(!is_single_component_domain("::1")); // IPv6 localhost
        assert!(!is_single_component_domain("[::1]")); // Bracketed IPv6
        assert!(!is_single_component_domain("2001:db8::1")); // IPv6 address
    }

    #[test]
    fn test_is_localhost() {
        // Localhost variants
        assert!(is_localhost("localhost"));
        assert!(is_localhost("LOCALHOST")); // Case-insensitive
        assert!(is_localhost("sub.localhost"));
        assert!(is_localhost("app.sub.localhost"));

        // IP localhost
        assert!(is_localhost("127.0.0.1"));
        assert!(is_localhost("::1"));
        assert!(is_localhost("[::1]"));

        // Not localhost
        assert!(!is_localhost("example.com"));
        assert!(!is_localhost("localhost.com")); // A .com domain, not localhost
        assert!(!is_localhost("notlocalhost"));
    }

    #[test]
    fn test_reject_tld_cookies() {
        let store = CookieStore::new();
        let url = Url::parse("https://example.com/").unwrap();

        // Try to set a cookie with Domain=com (a TLD)
        store.store_cookies_from_response(&url, &["bad=cookie; Domain=com".to_string()]);

        // Should be rejected - no cookies stored
        assert_eq!(store.get_all_cookies().len(), 0);
        assert!(store.get_cookie_header(&url).is_none());
    }

    #[test]
    fn test_allow_localhost_cookies() {
        let store = CookieStore::new();
        let url = Url::parse("http://localhost:3000/").unwrap();

        // A cookie with Domain=localhost should be allowed
        store.store_cookies_from_response(&url, &["session=abc; Domain=localhost".to_string()]);

        // Should be accepted
        assert_eq!(store.get_all_cookies().len(), 1);
        assert!(store.get_cookie_header(&url).is_some());
    }

    #[test]
    fn test_allow_127_0_0_1_cookies() {
        let store = CookieStore::new();
        let url = Url::parse("http://127.0.0.1:8080/").unwrap();

        // A cookie without a Domain attribute (host-only) should work
        store.store_cookies_from_response(&url, &["session=xyz".to_string()]);

        // Should be accepted
        assert_eq!(store.get_all_cookies().len(), 1);
        assert!(store.get_cookie_header(&url).is_some());
    }

    #[test]
    fn test_allow_normal_domain_cookies() {
        let store = CookieStore::new();
        let url = Url::parse("https://example.com/").unwrap();

        // A cookie with a valid domain should be allowed
        store.store_cookies_from_response(&url, &["session=abc; Domain=example.com".to_string()]);

        // Should be accepted
        assert_eq!(store.get_all_cookies().len(), 1);
        assert!(store.get_cookie_header(&url).is_some());
    }
}

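A sketch of the persistence round trip the store is designed for (the doc comment on `get_all_cookies` says "for persistence"): drain cookies after a request and rehydrate the store for the next one. Assumes the module's own imports (`Url`, `Cookie`).

```rust
let url = Url::parse("https://example.com/").unwrap();
let store = CookieStore::new();
store.store_cookies_from_response(&url, &["session=abc123".to_string()]);

// Persist (e.g., to the workspace database)...
let saved: Vec<Cookie> = store.get_all_cookies();

// ...and restore for a later request
let restored = CookieStore::from_cookies(saved);
assert_eq!(restored.get_cookie_header(&url), Some("session=abc123".to_string()));
```
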
@@ -1,188 +0,0 @@
use crate::error::{Error, Result};
use async_compression::tokio::bufread::{
    BrotliDecoder, DeflateDecoder as AsyncDeflateDecoder, GzipDecoder,
    ZstdDecoder as AsyncZstdDecoder,
};
use flate2::read::{DeflateDecoder, GzDecoder};
use std::io::Read;
use tokio::io::{AsyncBufRead, AsyncRead};

/// Supported compression encodings
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ContentEncoding {
    Gzip,
    Deflate,
    Brotli,
    Zstd,
    Identity,
}

impl ContentEncoding {
    /// Parse a Content-Encoding header value into an encoding type.
    /// Returns Identity for unknown or missing encodings.
    pub fn from_header(value: Option<&str>) -> Self {
        match value.map(|s| s.trim().to_lowercase()).as_deref() {
            Some("gzip") | Some("x-gzip") => ContentEncoding::Gzip,
            Some("deflate") => ContentEncoding::Deflate,
            Some("br") => ContentEncoding::Brotli,
            Some("zstd") => ContentEncoding::Zstd,
            _ => ContentEncoding::Identity,
        }
    }
}

/// Result of decompression, containing both the decompressed data and size info
#[derive(Debug)]
pub struct DecompressResult {
    pub data: Vec<u8>,
    pub compressed_size: u64,
    pub decompressed_size: u64,
}

/// Decompress data based on the Content-Encoding.
/// Returns the original data unchanged if the encoding is Identity or unknown.
pub fn decompress(data: Vec<u8>, encoding: ContentEncoding) -> Result<DecompressResult> {
    let compressed_size = data.len() as u64;

    let decompressed = match encoding {
        ContentEncoding::Identity => data,
        ContentEncoding::Gzip => decompress_gzip(&data)?,
        ContentEncoding::Deflate => decompress_deflate(&data)?,
        ContentEncoding::Brotli => decompress_brotli(&data)?,
        ContentEncoding::Zstd => decompress_zstd(&data)?,
    };

    let decompressed_size = decompressed.len() as u64;

    Ok(DecompressResult { data: decompressed, compressed_size, decompressed_size })
}

fn decompress_gzip(data: &[u8]) -> Result<Vec<u8>> {
    let mut decoder = GzDecoder::new(data);
    let mut decompressed = Vec::new();
    decoder
        .read_to_end(&mut decompressed)
        .map_err(|e| Error::DecompressionError(format!("gzip decompression failed: {}", e)))?;
    Ok(decompressed)
}

fn decompress_deflate(data: &[u8]) -> Result<Vec<u8>> {
    let mut decoder = DeflateDecoder::new(data);
    let mut decompressed = Vec::new();
    decoder
        .read_to_end(&mut decompressed)
        .map_err(|e| Error::DecompressionError(format!("deflate decompression failed: {}", e)))?;
    Ok(decompressed)
}

fn decompress_brotli(data: &[u8]) -> Result<Vec<u8>> {
    let mut decompressed = Vec::new();
    brotli::BrotliDecompress(&mut std::io::Cursor::new(data), &mut decompressed)
        .map_err(|e| Error::DecompressionError(format!("brotli decompression failed: {}", e)))?;
    Ok(decompressed)
}

fn decompress_zstd(data: &[u8]) -> Result<Vec<u8>> {
    zstd::stream::decode_all(std::io::Cursor::new(data))
        .map_err(|e| Error::DecompressionError(format!("zstd decompression failed: {}", e)))
}

/// Create a streaming decompressor that wraps an async reader.
/// Returns an AsyncRead that decompresses data on-the-fly.
pub fn streaming_decoder<R: AsyncBufRead + Unpin + Send + 'static>(
    reader: R,
    encoding: ContentEncoding,
) -> Box<dyn AsyncRead + Unpin + Send> {
    match encoding {
        ContentEncoding::Identity => Box::new(reader),
        ContentEncoding::Gzip => Box::new(GzipDecoder::new(reader)),
        ContentEncoding::Deflate => Box::new(AsyncDeflateDecoder::new(reader)),
        ContentEncoding::Brotli => Box::new(BrotliDecoder::new(reader)),
        ContentEncoding::Zstd => Box::new(AsyncZstdDecoder::new(reader)),
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use flate2::Compression;
    use flate2::write::GzEncoder;
    use std::io::Write;

    #[test]
    fn test_content_encoding_from_header() {
        assert_eq!(ContentEncoding::from_header(Some("gzip")), ContentEncoding::Gzip);
        assert_eq!(ContentEncoding::from_header(Some("x-gzip")), ContentEncoding::Gzip);
        assert_eq!(ContentEncoding::from_header(Some("GZIP")), ContentEncoding::Gzip);
        assert_eq!(ContentEncoding::from_header(Some("deflate")), ContentEncoding::Deflate);
        assert_eq!(ContentEncoding::from_header(Some("br")), ContentEncoding::Brotli);
        assert_eq!(ContentEncoding::from_header(Some("zstd")), ContentEncoding::Zstd);
        assert_eq!(ContentEncoding::from_header(Some("identity")), ContentEncoding::Identity);
        assert_eq!(ContentEncoding::from_header(Some("unknown")), ContentEncoding::Identity);
        assert_eq!(ContentEncoding::from_header(None), ContentEncoding::Identity);
    }

    #[test]
    fn test_decompress_identity() {
        let data = b"hello world".to_vec();
        let result = decompress(data.clone(), ContentEncoding::Identity).unwrap();
        assert_eq!(result.data, data);
        assert_eq!(result.compressed_size, 11);
        assert_eq!(result.decompressed_size, 11);
    }

    #[test]
    fn test_decompress_gzip() {
        // Compress some data with gzip
        let original = b"hello world, this is a test of gzip compression";
        let mut encoder = GzEncoder::new(Vec::new(), Compression::default());
        encoder.write_all(original).unwrap();
        let compressed = encoder.finish().unwrap();

        let result = decompress(compressed.clone(), ContentEncoding::Gzip).unwrap();
        assert_eq!(result.data, original);
        assert_eq!(result.compressed_size, compressed.len() as u64);
        assert_eq!(result.decompressed_size, original.len() as u64);
    }

    #[test]
    fn test_decompress_deflate() {
        // Compress some data with deflate
        let original = b"hello world, this is a test of deflate compression";
        let mut encoder = flate2::write::DeflateEncoder::new(Vec::new(), Compression::default());
        encoder.write_all(original).unwrap();
        let compressed = encoder.finish().unwrap();

        let result = decompress(compressed.clone(), ContentEncoding::Deflate).unwrap();
        assert_eq!(result.data, original);
        assert_eq!(result.compressed_size, compressed.len() as u64);
        assert_eq!(result.decompressed_size, original.len() as u64);
    }

    #[test]
    fn test_decompress_brotli() {
        // Compress some data with brotli
        let original = b"hello world, this is a test of brotli compression";
        let mut compressed = Vec::new();
        let mut writer = brotli::CompressorWriter::new(&mut compressed, 4096, 4, 22);
        writer.write_all(original).unwrap();
        drop(writer);

        let result = decompress(compressed.clone(), ContentEncoding::Brotli).unwrap();
        assert_eq!(result.data, original);
        assert_eq!(result.compressed_size, compressed.len() as u64);
        assert_eq!(result.decompressed_size, original.len() as u64);
    }

    #[test]
    fn test_decompress_zstd() {
        // Compress some data with zstd
        let original = b"hello world, this is a test of zstd compression";
        let compressed = zstd::stream::encode_all(std::io::Cursor::new(original), 3).unwrap();

        let result = decompress(compressed.clone(), ContentEncoding::Zstd).unwrap();
        assert_eq!(result.data, original);
        assert_eq!(result.compressed_size, compressed.len() as u64);
        assert_eq!(result.decompressed_size, original.len() as u64);
    }
}

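A sketch of the streaming path, assuming gzip-compressed bytes already in memory; in the real sender the reader would wrap the network body instead. Relies on tokio's `AsyncRead` impl for `std::io::Cursor`.

```rust
use tokio::io::{AsyncReadExt, BufReader};

async fn stream_example(compressed: Vec<u8>) -> std::io::Result<Vec<u8>> {
    let encoding = ContentEncoding::from_header(Some("gzip"));
    // BufReader provides the AsyncBufRead bound streaming_decoder requires
    let reader = BufReader::new(std::io::Cursor::new(compressed));
    let mut decoder = streaming_decoder(reader, encoding);
    let mut out = Vec::new();
    decoder.read_to_end(&mut out).await?;
    Ok(out)
}
```
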
@@ -1,186 +0,0 @@
use crate::sender::HttpResponseEvent;
use hyper_util::client::legacy::connect::dns::{
    GaiResolver as HyperGaiResolver, Name as HyperName,
};
use log::info;
use reqwest::dns::{Addrs, Name, Resolve, Resolving};
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr};
use std::str::FromStr;
use std::sync::Arc;
use std::time::Instant;
use tokio::sync::{RwLock, mpsc};
use tower_service::Service;
use yaak_models::models::DnsOverride;

/// Stores resolved addresses for a hostname override
#[derive(Clone)]
pub struct ResolvedOverride {
    pub ipv4: Vec<Ipv4Addr>,
    pub ipv6: Vec<Ipv6Addr>,
}

#[derive(Clone)]
pub struct LocalhostResolver {
    fallback: HyperGaiResolver,
    event_tx: Arc<RwLock<Option<mpsc::Sender<HttpResponseEvent>>>>,
    overrides: Arc<HashMap<String, ResolvedOverride>>,
}

impl LocalhostResolver {
    pub fn new(dns_overrides: Vec<DnsOverride>) -> Arc<Self> {
        let resolver = HyperGaiResolver::new();

        // Pre-parse DNS overrides into a lookup map
        let mut overrides = HashMap::new();
        for o in dns_overrides {
            if !o.enabled {
                continue;
            }
            let hostname = o.hostname.to_lowercase();

            let ipv4: Vec<Ipv4Addr> =
                o.ipv4.iter().filter_map(|s| s.parse::<Ipv4Addr>().ok()).collect();

            let ipv6: Vec<Ipv6Addr> =
                o.ipv6.iter().filter_map(|s| s.parse::<Ipv6Addr>().ok()).collect();

            // Only add if at least one address is valid
            if !ipv4.is_empty() || !ipv6.is_empty() {
                overrides.insert(hostname, ResolvedOverride { ipv4, ipv6 });
            }
        }

        Arc::new(Self {
            fallback: resolver,
            event_tx: Arc::new(RwLock::new(None)),
            overrides: Arc::new(overrides),
        })
    }

    /// Set the event sender for the current request.
    /// This should be called before each request to direct DNS events
    /// to the appropriate channel.
    pub async fn set_event_sender(&self, tx: Option<mpsc::Sender<HttpResponseEvent>>) {
        let mut guard = self.event_tx.write().await;
        *guard = tx;
    }
}

impl Resolve for LocalhostResolver {
    fn resolve(&self, name: Name) -> Resolving {
        let host = name.as_str().to_lowercase();
        let event_tx = self.event_tx.clone();
        let overrides = self.overrides.clone();

        info!("DNS resolve called for: {}", host);

        // Check for a DNS override first
        if let Some(resolved) = overrides.get(&host) {
            log::debug!("DNS override found for: {}", host);
            let hostname = host.clone();
            let mut addrs: Vec<SocketAddr> = Vec::new();

            // Add IPv4 addresses
            for ip in &resolved.ipv4 {
                addrs.push(SocketAddr::new(IpAddr::V4(*ip), 0));
            }

            // Add IPv6 addresses
            for ip in &resolved.ipv6 {
                addrs.push(SocketAddr::new(IpAddr::V6(*ip), 0));
            }

            let addresses: Vec<String> = addrs.iter().map(|a| a.ip().to_string()).collect();

            return Box::pin(async move {
                // Emit a DNS event for the override
                let guard = event_tx.read().await;
                if let Some(tx) = guard.as_ref() {
                    let _ = tx
                        .send(HttpResponseEvent::DnsResolved {
                            hostname,
                            addresses,
                            duration: 0,
                            overridden: true,
                        })
                        .await;
                }

                Ok::<Addrs, Box<dyn std::error::Error + Send + Sync>>(Box::new(addrs.into_iter()))
            });
        }

        // Check for a .localhost suffix
        let is_localhost = host.ends_with(".localhost");
        if is_localhost {
            let hostname = host.clone();
            // Port 0 is fine; reqwest replaces it with the URL's explicit
            // port or the scheme's default (80/443, etc.).
            let addrs: Vec<SocketAddr> = vec![
                SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 0),
                SocketAddr::new(IpAddr::V6(Ipv6Addr::LOCALHOST), 0),
            ];

            let addresses: Vec<String> = addrs.iter().map(|a| a.ip().to_string()).collect();

            return Box::pin(async move {
                // Emit a DNS event for the localhost resolution
                let guard = event_tx.read().await;
                if let Some(tx) = guard.as_ref() {
                    let _ = tx
                        .send(HttpResponseEvent::DnsResolved {
                            hostname,
                            addresses,
                            duration: 0,
                            overridden: false,
                        })
                        .await;
                }

                Ok::<Addrs, Box<dyn std::error::Error + Send + Sync>>(Box::new(addrs.into_iter()))
            });
        }

        // Fall back to system DNS
        let mut fallback = self.fallback.clone();
        let name_str = name.as_str().to_string();
        let hostname = host.clone();

        Box::pin(async move {
            let start = Instant::now();

            let result = match HyperName::from_str(&name_str) {
                Ok(n) => fallback.call(n).await,
                Err(e) => return Err(Box::new(e) as Box<dyn std::error::Error + Send + Sync>),
            };

            let duration = start.elapsed().as_millis() as u64;

            match result {
                Ok(addrs) => {
                    // Collect addresses for event emission
                    let addr_vec: Vec<SocketAddr> = addrs.collect();
                    let addresses: Vec<String> =
                        addr_vec.iter().map(|a| a.ip().to_string()).collect();

                    // Emit a DNS event
                    let guard = event_tx.read().await;
                    if let Some(tx) = guard.as_ref() {
                        let _ = tx
                            .send(HttpResponseEvent::DnsResolved {
                                hostname,
                                addresses,
                                duration,
                                overridden: false,
                            })
                            .await;
                    }

                    Ok(Box::new(addr_vec.into_iter()) as Addrs)
                }
                Err(err) => Err(Box::new(err) as Box<dyn std::error::Error + Send + Sync>),
            }
        })
    }
}

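A wiring sketch: since the resolver implements reqwest's `Resolve` trait and `new` already returns an `Arc`, it can be attached via `ClientBuilder::dns_resolver`. The empty override list is illustrative; in practice it would be populated from workspace settings.

```rust
fn build_client() -> reqwest::Result<reqwest::Client> {
    // No overrides here; .localhost handling and system fallback still apply
    let resolver = LocalhostResolver::new(vec![]);
    reqwest::Client::builder().dns_resolver(resolver).build()
}
```
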
@@ -1,37 +0,0 @@
use serde::{Serialize, Serializer};
use thiserror::Error;

#[derive(Error, Debug)]
pub enum Error {
    #[error("Client error: {0:?}")]
    Client(#[from] reqwest::Error),

    #[error(transparent)]
    TlsError(#[from] yaak_tls::error::Error),

    #[error("Request failed with {0:?}")]
    RequestError(String),

    #[error("Request canceled")]
    RequestCanceledError,

    #[error("Timeout of {0:?} reached")]
    RequestTimeout(std::time::Duration),

    #[error("Decompression error: {0}")]
    DecompressionError(String),

    #[error("Failed to read response body: {0}")]
    BodyReadError(String),
}

impl Serialize for Error {
    fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        serializer.serialize_str(self.to_string().as_ref())
    }
}

pub type Result<T> = std::result::Result<T, Error>;

@@ -1,13 +0,0 @@
mod chained_reader;
pub mod client;
pub mod cookies;
pub mod decompress;
pub mod dns;
pub mod error;
pub mod manager;
pub mod path_placeholders;
mod proto;
pub mod sender;
pub mod tee_reader;
pub mod transaction;
pub mod types;

@@ -1,53 +0,0 @@
use crate::client::HttpConnectionOptions;
use crate::dns::LocalhostResolver;
use crate::error::Result;
use log::info;
use reqwest::Client;
use std::collections::BTreeMap;
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::sync::RwLock;

/// A cached HTTP client along with its DNS resolver.
/// The resolver is needed to set the event sender per request.
pub struct CachedClient {
    pub client: Client,
    pub resolver: Arc<LocalhostResolver>,
}

pub struct HttpConnectionManager {
    connections: Arc<RwLock<BTreeMap<String, (CachedClient, Instant)>>>,
    ttl: Duration,
}

impl HttpConnectionManager {
    pub fn new() -> Self {
        Self {
            connections: Arc::new(RwLock::new(BTreeMap::new())),
            ttl: Duration::from_secs(10 * 60),
        }
    }

    pub async fn get_client(&self, opt: &HttpConnectionOptions) -> Result<CachedClient> {
        let mut connections = self.connections.write().await;
        let id = opt.id.clone();

        // Clean out old connections
        connections.retain(|_, (_, last_used)| last_used.elapsed() <= self.ttl);

        if let Some((cached, last_used)) = connections.get_mut(&id) {
            info!("Re-using HTTP client {id}");
            *last_used = Instant::now();
            return Ok(CachedClient {
                client: cached.client.clone(),
                resolver: cached.resolver.clone(),
            });
        }

        let (client, resolver) = opt.build_client()?;
        let cached = CachedClient { client: client.clone(), resolver: resolver.clone() };
        connections.insert(id.into(), (cached, Instant::now()));

        Ok(CachedClient { client, resolver })
    }
}

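A minimal sketch of reusing clients through the manager. `opts` stands in for an `HttpConnectionOptions` built by the caller; its construction is not shown in this diff.

```rust
async fn send_with_cached_client(
    manager: &HttpConnectionManager,
    opts: &HttpConnectionOptions, // assumed to be built elsewhere
) -> Result<()> {
    let cached = manager.get_client(opts).await?;
    // Direct per-request DNS events to a channel here if desired
    cached.resolver.set_event_sender(None).await;
    let _resp = cached.client.get("https://example.com").send().await;
    Ok(())
}
```
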
@@ -1,29 +0,0 @@
use reqwest::Url;
use std::str::FromStr;

pub(crate) fn ensure_proto(url_str: &str) -> String {
    if url_str.is_empty() {
        return "".to_string();
    }

    if url_str.starts_with("http://") || url_str.starts_with("https://") {
        return url_str.to_string();
    }

    // Url::from_str will fail without a proto, so add one
    let parseable_url = format!("http://{}", url_str);
    if let Ok(u) = Url::from_str(parseable_url.as_str()) {
        if let Some(host) = u.host() {
            let h = host.to_string();
            // These TLDs force HTTPS
            if h.ends_with(".app") || h.ends_with(".dev") || h.ends_with(".page") {
                return format!("https://{url_str}");
            }
        }
    }

    format!("http://{url_str}")
}

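The behavior above, pinned down as assertions derived directly from the branches of the function:

```rust
assert_eq!(ensure_proto("example.com"), "http://example.com"); // default to http
assert_eq!(ensure_proto("https://example.com"), "https://example.com"); // explicit proto kept
assert_eq!(ensure_proto("yaak.dev"), "https://yaak.dev"); // .dev is an HTTPS-only TLD
assert_eq!(ensure_proto(""), ""); // empty input stays empty
```
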
@@ -1,532 +0,0 @@
use crate::decompress::{ContentEncoding, streaming_decoder};
use crate::error::{Error, Result};
use crate::types::{SendableBody, SendableHttpRequest};
use async_trait::async_trait;
use futures_util::StreamExt;
use reqwest::{Client, Method, Version};
use std::fmt::Display;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Duration;
use tokio::io::{AsyncRead, AsyncReadExt, BufReader, ReadBuf};
use tokio::sync::mpsc;
use tokio_util::io::StreamReader;

#[derive(Debug, Clone)]
pub enum RedirectBehavior {
    /// 307/308: Method and body are preserved
    Preserve,
    /// 303 or 301/302 with POST: Method changed to GET, body dropped
    DropBody,
}

#[derive(Debug, Clone)]
pub enum HttpResponseEvent {
    Setting(String, String),
    Info(String),
    Redirect {
        url: String,
        status: u16,
        behavior: RedirectBehavior,
    },
    SendUrl {
        method: String,
        scheme: String,
        username: String,
        password: String,
        host: String,
        port: u16,
        path: String,
        query: String,
        fragment: String,
    },
    ReceiveUrl {
        version: Version,
        status: String,
    },
    HeaderUp(String, String),
    HeaderDown(String, String),
    ChunkSent {
        bytes: usize,
    },
    ChunkReceived {
        bytes: usize,
    },
    DnsResolved {
        hostname: String,
        addresses: Vec<String>,
        duration: u64,
        overridden: bool,
    },
}

impl Display for HttpResponseEvent {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            HttpResponseEvent::Setting(name, value) => write!(f, "* Setting {}={}", name, value),
            HttpResponseEvent::Info(s) => write!(f, "* {}", s),
            HttpResponseEvent::Redirect { url, status, behavior } => {
                let behavior_str = match behavior {
                    RedirectBehavior::Preserve => "preserve",
                    RedirectBehavior::DropBody => "drop body",
                };
                write!(f, "* Redirect {} -> {} ({})", status, url, behavior_str)
            }
            HttpResponseEvent::SendUrl { method, scheme, username, password, host, port, path, query, fragment } => {
                let auth_str = if username.is_empty() && password.is_empty() {
                    String::new()
                } else {
                    format!("{}:{}@", username, password)
                };
                let query_str = if query.is_empty() { String::new() } else { format!("?{}", query) };
                let fragment_str = if fragment.is_empty() { String::new() } else { format!("#{}", fragment) };
                write!(f, "> {} {}://{}{}:{}{}{}{}", method, scheme, auth_str, host, port, path, query_str, fragment_str)
            }
            HttpResponseEvent::ReceiveUrl { version, status } => {
                write!(f, "< {} {}", version_to_str(version), status)
            }
            HttpResponseEvent::HeaderUp(name, value) => write!(f, "> {}: {}", name, value),
            HttpResponseEvent::HeaderDown(name, value) => write!(f, "< {}: {}", name, value),
            HttpResponseEvent::ChunkSent { bytes } => write!(f, "> [{} bytes sent]", bytes),
            HttpResponseEvent::ChunkReceived { bytes } => write!(f, "< [{} bytes received]", bytes),
            HttpResponseEvent::DnsResolved { hostname, addresses, duration, overridden } => {
                if *overridden {
                    write!(f, "* DNS override {} -> {}", hostname, addresses.join(", "))
                } else {
                    write!(
                        f,
                        "* DNS resolved {} to {} ({}ms)",
                        hostname,
                        addresses.join(", "),
                        duration
                    )
                }
            }
        }
    }
}

impl From<HttpResponseEvent> for yaak_models::models::HttpResponseEventData {
    fn from(event: HttpResponseEvent) -> Self {
        use yaak_models::models::HttpResponseEventData as D;
        match event {
            HttpResponseEvent::Setting(name, value) => D::Setting { name, value },
            HttpResponseEvent::Info(message) => D::Info { message },
            HttpResponseEvent::Redirect { url, status, behavior } => D::Redirect {
                url,
                status,
                behavior: match behavior {
                    RedirectBehavior::Preserve => "preserve".to_string(),
                    RedirectBehavior::DropBody => "drop_body".to_string(),
                },
            },
            HttpResponseEvent::SendUrl { method, scheme, username, password, host, port, path, query, fragment } => {
                D::SendUrl { method, scheme, username, password, host, port, path, query, fragment }
            }
            HttpResponseEvent::ReceiveUrl { version, status } => {
                D::ReceiveUrl { version: format!("{:?}", version), status }
            }
            HttpResponseEvent::HeaderUp(name, value) => D::HeaderUp { name, value },
            HttpResponseEvent::HeaderDown(name, value) => D::HeaderDown { name, value },
            HttpResponseEvent::ChunkSent { bytes } => D::ChunkSent { bytes },
            HttpResponseEvent::ChunkReceived { bytes } => D::ChunkReceived { bytes },
            HttpResponseEvent::DnsResolved { hostname, addresses, duration, overridden } => {
                D::DnsResolved { hostname, addresses, duration, overridden }
            }
        }
    }
}

/// Statistics about the body after consumption
#[derive(Debug, Default, Clone)]
pub struct BodyStats {
    /// Size of the body as received over the wire (before decompression)
    pub size_compressed: u64,
    /// Size of the body after decompression
    pub size_decompressed: u64,
}

/// An AsyncRead wrapper that sends chunk events as data is read
pub struct TrackingRead<R> {
    inner: R,
    event_tx: mpsc::Sender<HttpResponseEvent>,
    ended: bool,
}

impl<R> TrackingRead<R> {
    pub fn new(inner: R, event_tx: mpsc::Sender<HttpResponseEvent>) -> Self {
        Self { inner, event_tx, ended: false }
    }
}

impl<R: AsyncRead + Unpin> AsyncRead for TrackingRead<R> {
    fn poll_read(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<std::io::Result<()>> {
        let before = buf.filled().len();
        let result = Pin::new(&mut self.inner).poll_read(cx, buf);
        if let Poll::Ready(Ok(())) = &result {
            let bytes_read = buf.filled().len() - before;
            if bytes_read > 0 {
                // Ignore send errors - receiver may have been dropped or channel is full
                let _ =
                    self.event_tx.try_send(HttpResponseEvent::ChunkReceived { bytes: bytes_read });
            } else if !self.ended {
                self.ended = true;
            }
        }
        result
    }
}

/// Type alias for the body stream
type BodyStream = Pin<Box<dyn AsyncRead + Send>>;

/// HTTP response with deferred body consumption.
/// Headers are available immediately after send(), body can be consumed in different ways.
/// Note: Debug is manually implemented since BodyStream doesn't implement Debug.
pub struct HttpResponse {
    /// HTTP status code
    pub status: u16,
    /// HTTP status reason phrase (e.g., "OK", "Not Found")
    pub status_reason: Option<String>,
    /// Response headers (Vec to support multiple headers with same name, e.g., Set-Cookie)
    pub headers: Vec<(String, String)>,
    /// Request headers (Vec to support multiple headers with same name)
    pub request_headers: Vec<(String, String)>,
    /// Content-Length from headers (may differ from actual body size)
    pub content_length: Option<u64>,
    /// Final URL (after redirects)
    pub url: String,
    /// Remote address of the server
    pub remote_addr: Option<String>,
    /// HTTP version (e.g., "HTTP/1.1", "HTTP/2")
    pub version: Option<String>,

    /// The body stream (consumed when calling bytes(), text(), write_to_file(), or drain())
    body_stream: Option<BodyStream>,
    /// Content-Encoding for decompression
    encoding: ContentEncoding,
}

impl std::fmt::Debug for HttpResponse {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("HttpResponse")
            .field("status", &self.status)
            .field("status_reason", &self.status_reason)
            .field("headers", &self.headers)
            .field("content_length", &self.content_length)
            .field("url", &self.url)
            .field("remote_addr", &self.remote_addr)
            .field("version", &self.version)
            .field("body_stream", &"<stream>")
            .field("encoding", &self.encoding)
            .finish()
    }
}

impl HttpResponse {
    /// Create a new HttpResponse with an unconsumed body stream
    #[allow(clippy::too_many_arguments)]
    pub fn new(
        status: u16,
        status_reason: Option<String>,
        headers: Vec<(String, String)>,
        request_headers: Vec<(String, String)>,
        content_length: Option<u64>,
        url: String,
        remote_addr: Option<String>,
        version: Option<String>,
        body_stream: BodyStream,
        encoding: ContentEncoding,
    ) -> Self {
        Self {
            status,
            status_reason,
            headers,
            request_headers,
            content_length,
            url,
            remote_addr,
            version,
            body_stream: Some(body_stream),
            encoding,
        }
    }

    /// Consume the body and return it as bytes (loads entire body into memory).
    /// Also decompresses the body if Content-Encoding is set.
    pub async fn bytes(mut self) -> Result<(Vec<u8>, BodyStats)> {
        let stream = self.body_stream.take().ok_or_else(|| {
            Error::RequestError("Response body has already been consumed".to_string())
        })?;

        let buf_reader = BufReader::new(stream);
        let mut decoder = streaming_decoder(buf_reader, self.encoding);

        let mut decompressed = Vec::new();
        let mut bytes_read = 0u64;

        // Read through the decoder in chunks to track compressed size
        let mut buf = [0u8; 8192];
        loop {
            match decoder.read(&mut buf).await {
                Ok(0) => break,
                Ok(n) => {
                    decompressed.extend_from_slice(&buf[..n]);
                    bytes_read += n as u64;
                }
                Err(e) => {
                    return Err(Error::BodyReadError(e.to_string()));
                }
            }
        }

        let stats = BodyStats {
            // For now, we can't easily track compressed size when streaming through decoder
            // Use content_length as an approximation, or decompressed size if identity encoding
            size_compressed: self.content_length.unwrap_or(bytes_read),
            size_decompressed: decompressed.len() as u64,
        };

        Ok((decompressed, stats))
    }

    /// Consume the body and return it as a UTF-8 string.
    pub async fn text(self) -> Result<(String, BodyStats)> {
        let (bytes, stats) = self.bytes().await?;
        let text = String::from_utf8(bytes)
            .map_err(|e| Error::RequestError(format!("Response is not valid UTF-8: {}", e)))?;
        Ok((text, stats))
    }

    /// Take the body stream for manual consumption.
    /// Returns an AsyncRead that decompresses on-the-fly if Content-Encoding is set.
    /// The caller is responsible for reading and processing the stream.
    pub fn into_body_stream(&mut self) -> Result<Box<dyn AsyncRead + Unpin + Send>> {
        let stream = self.body_stream.take().ok_or_else(|| {
            Error::RequestError("Response body has already been consumed".to_string())
        })?;

        let buf_reader = BufReader::new(stream);
        let decoder = streaming_decoder(buf_reader, self.encoding);

        Ok(decoder)
    }

    /// Discard the body without reading it (useful for redirects).
    pub async fn drain(mut self) -> Result<()> {
        let stream = self.body_stream.take().ok_or_else(|| {
            Error::RequestError("Response body has already been consumed".to_string())
        })?;

        // Just read and discard all bytes
        let mut reader = stream;
        let mut buf = [0u8; 8192];
        loop {
            match reader.read(&mut buf).await {
                Ok(0) => break,
                Ok(_) => continue,
                Err(e) => {
                    return Err(Error::RequestError(format!(
                        "Failed to drain response body: {}",
                        e
                    )));
                }
            }
        }

        Ok(())
    }
}

/// Trait for sending HTTP requests
#[async_trait]
pub trait HttpSender: Send + Sync {
    /// Send an HTTP request and return the response with headers.
    /// The body is not consumed until you call bytes(), text(), write_to_file(), or drain().
    /// Events are sent through the provided channel.
    async fn send(
        &self,
        request: SendableHttpRequest,
        event_tx: mpsc::Sender<HttpResponseEvent>,
    ) -> Result<HttpResponse>;
}

/// Reqwest-based implementation of HttpSender
pub struct ReqwestSender {
    client: Client,
}

impl ReqwestSender {
    /// Create a new ReqwestSender with a default client
    pub fn new() -> Result<Self> {
        let client = Client::builder().build().map_err(Error::Client)?;
        Ok(Self { client })
    }

    /// Create a new ReqwestSender with a custom client
    pub fn with_client(client: Client) -> Self {
        Self { client }
    }
}

#[async_trait]
impl HttpSender for ReqwestSender {
    async fn send(
        &self,
        request: SendableHttpRequest,
        event_tx: mpsc::Sender<HttpResponseEvent>,
    ) -> Result<HttpResponse> {
        // Helper to send events (ignores errors if receiver is dropped or channel is full)
        let send_event = |event: HttpResponseEvent| {
            let _ = event_tx.try_send(event);
        };

        // Parse the HTTP method
        let method = Method::from_bytes(request.method.as_bytes())
            .map_err(|e| Error::RequestError(format!("Invalid HTTP method: {}", e)))?;

        // Build the request
        let mut req_builder = self.client.request(method, &request.url);

        // Add headers
        for header in request.headers {
            if header.0.is_empty() {
                continue;
            }
            req_builder = req_builder.header(&header.0, &header.1);
        }

        // Configure timeout
        if let Some(d) = request.options.timeout
            && !d.is_zero()
        {
            req_builder = req_builder.timeout(d);
        }

        // Add body
        match request.body {
            None => {}
            Some(SendableBody::Bytes(bytes)) => {
                req_builder = req_builder.body(bytes);
            }
            Some(SendableBody::Stream(stream)) => {
                // Convert AsyncRead stream to reqwest Body
                let stream = tokio_util::io::ReaderStream::new(stream);
                let body = reqwest::Body::wrap_stream(stream);
                req_builder = req_builder.body(body);
            }
        }

        // Send the request
        let sendable_req = req_builder.build()?;
        send_event(HttpResponseEvent::Setting(
            "timeout".to_string(),
            if request.options.timeout.unwrap_or_default().is_zero() {
                "Infinity".to_string()
            } else {
                format!("{:?}", request.options.timeout.unwrap_or_default())
            },
        ));

        send_event(HttpResponseEvent::SendUrl {
            method: sendable_req.method().to_string(),
            scheme: sendable_req.url().scheme().to_string(),
            username: sendable_req.url().username().to_string(),
            password: sendable_req.url().password().unwrap_or_default().to_string(),
            host: sendable_req.url().host_str().unwrap_or_default().to_string(),
            port: sendable_req.url().port_or_known_default().unwrap_or(0),
            path: sendable_req.url().path().to_string(),
            query: sendable_req.url().query().unwrap_or_default().to_string(),
            fragment: sendable_req.url().fragment().unwrap_or_default().to_string(),
        });

        let mut request_headers = Vec::new();
        for (name, value) in sendable_req.headers() {
            let v = value.to_str().unwrap_or_default().to_string();
            request_headers.push((name.to_string(), v.clone()));
            send_event(HttpResponseEvent::HeaderUp(name.to_string(), v));
        }
        send_event(HttpResponseEvent::Info("Sending request to server".to_string()));

        // Map some errors to our own, so they look nicer
        let response = self.client.execute(sendable_req).await.map_err(|e| {
            if e.is_timeout() {
                Error::RequestTimeout(
                    request.options.timeout.unwrap_or(Duration::from_secs(0)),
                )
            } else {
                Error::Client(e)
            }
        })?;

        let status = response.status().as_u16();
        let status_reason = response.status().canonical_reason().map(|s| s.to_string());
        let url = response.url().to_string();
        let remote_addr = response.remote_addr().map(|a| a.to_string());
        let version = Some(version_to_str(&response.version()));
        let content_length = response.content_length();

        send_event(HttpResponseEvent::ReceiveUrl {
            version: response.version(),
            status: response.status().to_string(),
        });

        // Extract headers (use Vec to preserve duplicates like Set-Cookie)
        let mut headers = Vec::new();
        for (key, value) in response.headers() {
            if let Ok(v) = value.to_str() {
                send_event(HttpResponseEvent::HeaderDown(key.to_string(), v.to_string()));
                headers.push((key.to_string(), v.to_string()));
            }
        }

        // Determine content encoding for decompression
        // HTTP headers are case-insensitive, so we need to search for any casing
        let encoding = ContentEncoding::from_header(
            headers
                .iter()
                .find(|(k, _)| k.eq_ignore_ascii_case("content-encoding"))
                .map(|(_, v)| v.as_str()),
        );

        // Get the byte stream instead of loading into memory
        let byte_stream = response.bytes_stream();

        // Convert the stream to an AsyncRead
        let stream_reader = StreamReader::new(
            byte_stream.map(|result| result.map_err(std::io::Error::other)),
        );

        // Wrap the stream with tracking to emit chunk received events via the same channel
        let tracking_reader = TrackingRead::new(stream_reader, event_tx);
        let body_stream: BodyStream = Box::pin(tracking_reader);

        Ok(HttpResponse::new(
            status,
            status_reason,
            headers,
            request_headers,
            content_length,
            url,
            remote_addr,
            version,
            body_stream,
            encoding,
        ))
    }
}

fn version_to_str(version: &Version) -> String {
    match *version {
        Version::HTTP_09 => "HTTP/0.9".to_string(),
        Version::HTTP_10 => "HTTP/1.0".to_string(),
        Version::HTTP_11 => "HTTP/1.1".to_string(),
        Version::HTTP_2 => "HTTP/2".to_string(),
        Version::HTTP_3 => "HTTP/3".to_string(),
        _ => "unknown".to_string(),
    }
}
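
End to end, the sender is used by building a `SendableHttpRequest`, passing an event channel, and then consuming the body explicitly. A minimal sketch (assuming a Tokio runtime and the crate's `Result`; the request fields come from the `types` module later in this diff):

```
use tokio::sync::mpsc;

async fn demo() -> Result<()> {
    let sender = ReqwestSender::new()?;
    let request = SendableHttpRequest {
        url: "https://example.com".to_string(),
        method: "GET".to_string(),
        ..Default::default()
    };

    // Events (settings, headers, chunks, DNS) arrive here as the request
    // progresses; a bounded buffer is fine since all sends use try_send.
    let (event_tx, mut event_rx) = mpsc::channel(100);
    let response = sender.send(request, event_tx).await?;
    println!("status: {}", response.status);

    // Headers were available immediately; the body is only consumed here
    let (text, stats) = response.text().await?;
    println!("{} bytes decompressed", stats.size_decompressed);
    let _ = text;

    // Drain whatever events were emitted (HttpResponseEvent implements Display)
    while let Ok(event) = event_rx.try_recv() {
        println!("{event}");
    }
    Ok(())
}
```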
@@ -1,159 +0,0 @@
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, ReadBuf};
use tokio::sync::mpsc;

/// A reader that forwards all read data to a channel while also returning it to the caller.
/// This allows capturing request body data as it's being sent.
/// Uses an unbounded channel to ensure all data is captured without blocking the request.
pub struct TeeReader<R> {
    inner: R,
    tx: mpsc::UnboundedSender<Vec<u8>>,
}

impl<R> TeeReader<R> {
    pub fn new(inner: R, tx: mpsc::UnboundedSender<Vec<u8>>) -> Self {
        Self { inner, tx }
    }
}

impl<R: AsyncRead + Unpin> AsyncRead for TeeReader<R> {
    fn poll_read(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<io::Result<()>> {
        let before_len = buf.filled().len();

        match Pin::new(&mut self.inner).poll_read(cx, buf) {
            Poll::Ready(Ok(())) => {
                let after_len = buf.filled().len();
                if after_len > before_len {
                    // Data was read, send a copy to the channel
                    let data = buf.filled()[before_len..after_len].to_vec();
                    // Send to unbounded channel - this never blocks
                    // Ignore error if receiver is closed
                    let _ = self.tx.send(data);
                }
                Poll::Ready(Ok(()))
            }
            Poll::Ready(Err(e)) => Poll::Ready(Err(e)),
            Poll::Pending => Poll::Pending,
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::io::Cursor;
    use tokio::io::AsyncReadExt;

    #[tokio::test]
    async fn test_tee_reader_captures_all_data() {
        let data = b"Hello, World!";
        let cursor = Cursor::new(data.to_vec());
        let (tx, mut rx) = mpsc::unbounded_channel();

        let mut tee = TeeReader::new(cursor, tx);
        let mut output = Vec::new();
        tee.read_to_end(&mut output).await.unwrap();

        // Verify the reader returns the correct data
        assert_eq!(output, data);

        // Verify the channel received the data
        let mut captured = Vec::new();
        while let Ok(chunk) = rx.try_recv() {
            captured.extend(chunk);
        }
        assert_eq!(captured, data);
    }

    #[tokio::test]
    async fn test_tee_reader_with_chunked_reads() {
        let data = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        let cursor = Cursor::new(data.to_vec());
        let (tx, mut rx) = mpsc::unbounded_channel();

        let mut tee = TeeReader::new(cursor, tx);

        // Read in small chunks
        let mut buf = [0u8; 5];
        let mut output = Vec::new();
        loop {
            let n = tee.read(&mut buf).await.unwrap();
            if n == 0 {
                break;
            }
            output.extend_from_slice(&buf[..n]);
        }

        // Verify the reader returns the correct data
        assert_eq!(output, data);

        // Verify the channel received all chunks
        let mut captured = Vec::new();
        while let Ok(chunk) = rx.try_recv() {
            captured.extend(chunk);
        }
        assert_eq!(captured, data);
    }

    #[tokio::test]
    async fn test_tee_reader_empty_data() {
        let data: Vec<u8> = vec![];
        let cursor = Cursor::new(data.clone());
        let (tx, mut rx) = mpsc::unbounded_channel();

        let mut tee = TeeReader::new(cursor, tx);
        let mut output = Vec::new();
        tee.read_to_end(&mut output).await.unwrap();

        // Verify empty output
        assert!(output.is_empty());

        // Verify no data was sent to channel
        assert!(rx.try_recv().is_err());
    }

    #[tokio::test]
    async fn test_tee_reader_works_when_receiver_dropped() {
        let data = b"Hello, World!";
        let cursor = Cursor::new(data.to_vec());
        let (tx, rx) = mpsc::unbounded_channel();

        // Drop the receiver before reading
        drop(rx);

        let mut tee = TeeReader::new(cursor, tx);
        let mut output = Vec::new();

        // Should still work even though receiver is dropped
        tee.read_to_end(&mut output).await.unwrap();
        assert_eq!(output, data);
    }

    #[tokio::test]
    async fn test_tee_reader_large_data() {
        // Test with 1MB of data
        let data: Vec<u8> = (0..1024 * 1024).map(|i| (i % 256) as u8).collect();
        let cursor = Cursor::new(data.clone());
        let (tx, mut rx) = mpsc::unbounded_channel();

        let mut tee = TeeReader::new(cursor, tx);
        let mut output = Vec::new();
        tee.read_to_end(&mut output).await.unwrap();

        // Verify the reader returns the correct data
        assert_eq!(output, data);

        // Verify the channel received all data
        let mut captured = Vec::new();
        while let Ok(chunk) = rx.try_recv() {
            captured.extend(chunk);
        }
        assert_eq!(captured, data);
    }
}
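
As the doc comment notes, this exists so request bodies can be captured while they stream out. A sketch of that wiring — wrap a body reader in `TeeReader` before handing it to the sender, then reassemble the captured bytes from the receiver (hypothetical helper, not taken from this diff):

```
use tokio::sync::mpsc;

// Returns the wrapped reader plus a receiver that yields copies of every
// chunk read from `file`, so the sent body can be reassembled afterwards.
fn capture_body(
    file: tokio::fs::File,
) -> (TeeReader<tokio::fs::File>, mpsc::UnboundedReceiver<Vec<u8>>) {
    let (tx, rx) = mpsc::unbounded_channel();
    (TeeReader::new(file, tx), rx)
}
```

The unbounded channel is the key design choice here: capture can never apply backpressure to the outgoing request, at the cost of buffering the whole body in memory if the receiver falls behind.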
@@ -1,729 +0,0 @@
use crate::cookies::CookieStore;
use crate::error::Result;
use crate::sender::{HttpResponse, HttpResponseEvent, HttpSender, RedirectBehavior};
use crate::types::SendableHttpRequest;
use log::debug;
use tokio::sync::mpsc;
use tokio::sync::watch::Receiver;
use url::Url;

/// HTTP Transaction that manages the lifecycle of a request, including redirect handling
pub struct HttpTransaction<S: HttpSender> {
    sender: S,
    max_redirects: usize,
    cookie_store: Option<CookieStore>,
}

impl<S: HttpSender> HttpTransaction<S> {
    /// Create a new transaction with default settings
    pub fn new(sender: S) -> Self {
        Self { sender, max_redirects: 10, cookie_store: None }
    }

    /// Create a new transaction with custom max redirects
    pub fn with_max_redirects(sender: S, max_redirects: usize) -> Self {
        Self { sender, max_redirects, cookie_store: None }
    }

    /// Create a new transaction with a cookie store
    pub fn with_cookie_store(sender: S, cookie_store: CookieStore) -> Self {
        Self { sender, max_redirects: 10, cookie_store: Some(cookie_store) }
    }

    /// Create a new transaction with custom max redirects and a cookie store
    pub fn with_options(
        sender: S,
        max_redirects: usize,
        cookie_store: Option<CookieStore>,
    ) -> Self {
        Self { sender, max_redirects, cookie_store }
    }

    /// Execute the request with cancellation support.
    /// Returns an HttpResponse with unconsumed body - caller decides how to consume it.
    /// Events are sent through the provided channel.
    pub async fn execute_with_cancellation(
        &self,
        request: SendableHttpRequest,
        mut cancelled_rx: Receiver<bool>,
        event_tx: mpsc::Sender<HttpResponseEvent>,
    ) -> Result<HttpResponse> {
        let mut redirect_count = 0;
        let mut current_url = request.url;
        let mut current_method = request.method;
        let mut current_headers = request.headers;
        let mut current_body = request.body;

        // Helper to send events (ignores errors if receiver is dropped or channel is full)
        let send_event = |event: HttpResponseEvent| {
            let _ = event_tx.try_send(event);
        };

        loop {
            // Check for cancellation before each request
            if *cancelled_rx.borrow() {
                return Err(crate::error::Error::RequestCanceledError);
            }

            // Inject cookies into headers if we have a cookie store
            let headers_with_cookies = if let Some(cookie_store) = &self.cookie_store {
                let mut headers = current_headers.clone();
                if let Ok(url) = Url::parse(&current_url) {
                    if let Some(cookie_header) = cookie_store.get_cookie_header(&url) {
                        debug!("Injecting Cookie header: {}", cookie_header);
                        // Check if there's already a Cookie header and merge if so
                        if let Some(existing) =
                            headers.iter_mut().find(|h| h.0.eq_ignore_ascii_case("cookie"))
                        {
                            existing.1 = format!("{}; {}", existing.1, cookie_header);
                        } else {
                            headers.push(("Cookie".to_string(), cookie_header));
                        }
                    }
                }
                headers
            } else {
                current_headers.clone()
            };

            // Build request for this iteration
            let req = SendableHttpRequest {
                url: current_url.clone(),
                method: current_method.clone(),
                headers: headers_with_cookies,
                body: current_body,
                options: request.options.clone(),
            };

            // Send the request
            send_event(HttpResponseEvent::Setting(
                "redirects".to_string(),
                request.options.follow_redirects.to_string(),
            ));

            // Execute with cancellation support
            let response = tokio::select! {
                result = self.sender.send(req, event_tx.clone()) => result?,
                _ = cancelled_rx.changed() => {
                    return Err(crate::error::Error::RequestCanceledError);
                }
            };

            // Parse Set-Cookie headers and store cookies
            if let Some(cookie_store) = &self.cookie_store {
                if let Ok(url) = Url::parse(&current_url) {
                    let set_cookie_headers: Vec<String> = response
                        .headers
                        .iter()
                        .filter(|(k, _)| k.eq_ignore_ascii_case("set-cookie"))
                        .map(|(_, v)| v.clone())
                        .collect();

                    if !set_cookie_headers.is_empty() {
                        debug!("Storing {} cookies from response", set_cookie_headers.len());
                        cookie_store.store_cookies_from_response(&url, &set_cookie_headers);
                    }
                }
            }

            if !Self::is_redirect(response.status) {
                // Not a redirect - return the response for caller to consume body
                return Ok(response);
            }

            if !request.options.follow_redirects {
                // Redirects disabled - return the redirect response as-is
                return Ok(response);
            }

            // Check if we've exceeded max redirects
            if redirect_count >= self.max_redirects {
                // Drain the response before returning error
                let _ = response.drain().await;
                return Err(crate::error::Error::RequestError(format!(
                    "Maximum redirect limit ({}) exceeded",
                    self.max_redirects
                )));
            }

            // Extract Location header before draining (headers are available immediately)
            // HTTP headers are case-insensitive, so we need to search for any casing
            let location = response
                .headers
                .iter()
                .find(|(k, _)| k.eq_ignore_ascii_case("location"))
                .map(|(_, v)| v.clone())
                .ok_or_else(|| {
                    crate::error::Error::RequestError(
                        "Redirect response missing Location header".to_string(),
                    )
                })?;

            // Also get status before draining
            let status = response.status;

            send_event(HttpResponseEvent::Info("Ignoring the response body".to_string()));

            // Drain the redirect response body before following
            response.drain().await?;

            // Update the request URL
            current_url = if location.starts_with("http://") || location.starts_with("https://") {
                // Absolute URL
                location
            } else if location.starts_with('/') {
                // Absolute path - need to extract base URL from current request
                let base_url = Self::extract_base_url(&current_url)?;
                format!("{}{}", base_url, location)
            } else {
                // Relative path - need to resolve relative to current path
                let base_path = Self::extract_base_path(&current_url)?;
                format!("{}/{}", base_path, location)
            };

            // Determine redirect behavior based on status code and method
            let behavior = if status == 303 {
                // 303 See Other always changes to GET
                RedirectBehavior::DropBody
            } else if (status == 301 || status == 302) && current_method == "POST" {
                // For 301/302, change POST to GET (common browser behavior)
                RedirectBehavior::DropBody
            } else {
                // For 307 and 308, the method and body are preserved
                // Also for 301/302 with non-POST methods
                RedirectBehavior::Preserve
            };

            send_event(HttpResponseEvent::Redirect {
                url: current_url.clone(),
                status,
                behavior: behavior.clone(),
            });

            // Handle method changes for certain redirect codes
            if matches!(behavior, RedirectBehavior::DropBody) {
                if current_method != "GET" {
                    current_method = "GET".to_string();
                }
                // Remove content-related headers
                current_headers.retain(|h| {
                    let name_lower = h.0.to_lowercase();
                    !name_lower.starts_with("content-") && name_lower != "transfer-encoding"
                });
            }

            // Reset body for next iteration (since it was moved in the send call)
            // For redirects that change method to GET or for all redirects since body was consumed
            current_body = None;

            redirect_count += 1;
        }
    }

    /// Check if a status code indicates a redirect
    fn is_redirect(status: u16) -> bool {
        matches!(status, 301 | 302 | 303 | 307 | 308)
    }

    /// Extract the base URL (scheme + host) from a full URL
    fn extract_base_url(url: &str) -> Result<String> {
        // Find the position after "://"
        let scheme_end = url.find("://").ok_or_else(|| {
            crate::error::Error::RequestError(format!("Invalid URL format: {}", url))
        })?;

        // Find the first '/' after the scheme
        let path_start = url[scheme_end + 3..].find('/');

        if let Some(idx) = path_start {
            Ok(url[..scheme_end + 3 + idx].to_string())
        } else {
            // No path, return entire URL
            Ok(url.to_string())
        }
    }

    /// Extract the base path (everything except the last segment) from a URL
    fn extract_base_path(url: &str) -> Result<String> {
        if let Some(last_slash) = url.rfind('/') {
            // Don't include the trailing slash if it's part of the host
            if url[..last_slash].ends_with("://") || url[..last_slash].ends_with(':') {
                Ok(url.to_string())
            } else {
                Ok(url[..last_slash].to_string())
            }
        } else {
            Ok(url.to_string())
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::decompress::ContentEncoding;
    use crate::sender::{HttpResponseEvent, HttpSender};
    use async_trait::async_trait;
    use std::pin::Pin;
    use std::sync::Arc;
    use tokio::io::AsyncRead;
    use tokio::sync::Mutex;

    /// Mock sender for testing
    struct MockSender {
        responses: Arc<Mutex<Vec<MockResponse>>>,
    }

    struct MockResponse {
        status: u16,
        headers: Vec<(String, String)>,
        body: Vec<u8>,
    }

    impl MockSender {
        fn new(responses: Vec<MockResponse>) -> Self {
            Self { responses: Arc::new(Mutex::new(responses)) }
        }
    }

    #[async_trait]
    impl HttpSender for MockSender {
        async fn send(
            &self,
            _request: SendableHttpRequest,
            _event_tx: mpsc::Sender<HttpResponseEvent>,
        ) -> Result<HttpResponse> {
            let mut responses = self.responses.lock().await;
            if responses.is_empty() {
                Err(crate::error::Error::RequestError("No more mock responses".to_string()))
            } else {
                let mock = responses.remove(0);
                // Create a simple in-memory stream from the body
                let body_stream: Pin<Box<dyn AsyncRead + Send>> =
                    Box::pin(std::io::Cursor::new(mock.body));
                Ok(HttpResponse::new(
                    mock.status,
                    None, // status_reason
                    mock.headers,
                    Vec::new(),
                    None, // content_length
                    "https://example.com".to_string(), // url
                    None, // remote_addr
                    Some("HTTP/1.1".to_string()), // version
                    body_stream,
                    ContentEncoding::Identity,
                ))
            }
        }
    }

    #[tokio::test]
    async fn test_transaction_no_redirect() {
        let response = MockResponse { status: 200, headers: Vec::new(), body: b"OK".to_vec() };
        let sender = MockSender::new(vec![response]);
        let transaction = HttpTransaction::new(sender);

        let request = SendableHttpRequest {
            url: "https://example.com".to_string(),
            method: "GET".to_string(),
            headers: vec![],
            ..Default::default()
        };

        let (_tx, rx) = tokio::sync::watch::channel(false);
        let (event_tx, _event_rx) = mpsc::channel(100);
        let result = transaction.execute_with_cancellation(request, rx, event_tx).await.unwrap();
        assert_eq!(result.status, 200);

        // Consume the body to verify it
        let (body, _) = result.bytes().await.unwrap();
        assert_eq!(body, b"OK");
    }

    #[tokio::test]
    async fn test_transaction_single_redirect() {
        let redirect_headers =
            vec![("Location".to_string(), "https://example.com/new".to_string())];

        let responses = vec![
            MockResponse { status: 302, headers: redirect_headers, body: vec![] },
            MockResponse { status: 200, headers: Vec::new(), body: b"Final".to_vec() },
        ];

        let sender = MockSender::new(responses);
        let transaction = HttpTransaction::new(sender);

        let request = SendableHttpRequest {
            url: "https://example.com/old".to_string(),
            method: "GET".to_string(),
            options: crate::types::SendableHttpRequestOptions {
                follow_redirects: true,
                ..Default::default()
            },
            ..Default::default()
        };

        let (_tx, rx) = tokio::sync::watch::channel(false);
        let (event_tx, _event_rx) = mpsc::channel(100);
        let result = transaction.execute_with_cancellation(request, rx, event_tx).await.unwrap();
        assert_eq!(result.status, 200);

        let (body, _) = result.bytes().await.unwrap();
        assert_eq!(body, b"Final");
    }

    #[tokio::test]
    async fn test_transaction_max_redirects_exceeded() {
        let redirect_headers =
            vec![("Location".to_string(), "https://example.com/loop".to_string())];

        // Create more redirects than allowed
        let responses: Vec<MockResponse> = (0..12)
            .map(|_| MockResponse { status: 302, headers: redirect_headers.clone(), body: vec![] })
            .collect();

        let sender = MockSender::new(responses);
        let transaction = HttpTransaction::with_max_redirects(sender, 10);

        let request = SendableHttpRequest {
            url: "https://example.com/start".to_string(),
            method: "GET".to_string(),
            options: crate::types::SendableHttpRequestOptions {
                follow_redirects: true,
                ..Default::default()
            },
            ..Default::default()
        };

        let (_tx, rx) = tokio::sync::watch::channel(false);
        let (event_tx, _event_rx) = mpsc::channel(100);
        let result = transaction.execute_with_cancellation(request, rx, event_tx).await;
        if let Err(crate::error::Error::RequestError(msg)) = &result {
            assert!(msg.contains("Maximum redirect limit"));
        } else {
            panic!("Expected RequestError with max redirect message. Got {result:?}");
        }
    }

    #[test]
    fn test_is_redirect() {
        assert!(HttpTransaction::<MockSender>::is_redirect(301));
        assert!(HttpTransaction::<MockSender>::is_redirect(302));
        assert!(HttpTransaction::<MockSender>::is_redirect(303));
        assert!(HttpTransaction::<MockSender>::is_redirect(307));
        assert!(HttpTransaction::<MockSender>::is_redirect(308));
        assert!(!HttpTransaction::<MockSender>::is_redirect(200));
        assert!(!HttpTransaction::<MockSender>::is_redirect(404));
        assert!(!HttpTransaction::<MockSender>::is_redirect(500));
    }

    #[test]
    fn test_extract_base_url() {
        let result =
            HttpTransaction::<MockSender>::extract_base_url("https://example.com/path/to/resource");
        assert_eq!(result.unwrap(), "https://example.com");

        let result = HttpTransaction::<MockSender>::extract_base_url("http://localhost:8080/api");
        assert_eq!(result.unwrap(), "http://localhost:8080");

        let result = HttpTransaction::<MockSender>::extract_base_url("invalid-url");
        assert!(result.is_err());
    }

    #[test]
    fn test_extract_base_path() {
        let result = HttpTransaction::<MockSender>::extract_base_path(
            "https://example.com/path/to/resource",
        );
        assert_eq!(result.unwrap(), "https://example.com/path/to");

        let result = HttpTransaction::<MockSender>::extract_base_path("https://example.com/single");
        assert_eq!(result.unwrap(), "https://example.com");

        let result = HttpTransaction::<MockSender>::extract_base_path("https://example.com/");
        assert_eq!(result.unwrap(), "https://example.com");
    }

    #[tokio::test]
    async fn test_cookie_injection() {
        // Create a mock sender that verifies the Cookie header was injected
        struct CookieVerifyingSender {
            expected_cookie: String,
        }

        #[async_trait]
        impl HttpSender for CookieVerifyingSender {
            async fn send(
                &self,
                request: SendableHttpRequest,
                _event_tx: mpsc::Sender<HttpResponseEvent>,
            ) -> Result<HttpResponse> {
                // Verify the Cookie header was injected
                let cookie_header =
                    request.headers.iter().find(|(k, _)| k.eq_ignore_ascii_case("cookie"));

                assert!(cookie_header.is_some(), "Cookie header should be present");
                assert!(
                    cookie_header.unwrap().1.contains(&self.expected_cookie),
                    "Cookie header should contain expected value"
                );

                let body_stream: Pin<Box<dyn AsyncRead + Send>> =
                    Box::pin(std::io::Cursor::new(vec![]));
                Ok(HttpResponse::new(
                    200,
                    None,
                    Vec::new(),
                    Vec::new(),
                    None,
                    "https://example.com".to_string(),
                    None,
                    Some("HTTP/1.1".to_string()),
                    body_stream,
                    ContentEncoding::Identity,
                ))
            }
        }

        use yaak_models::models::{Cookie, CookieDomain, CookieExpires};

        // Create a cookie store with a test cookie
        let cookie = Cookie {
            raw_cookie: "session=abc123".to_string(),
            domain: CookieDomain::HostOnly("example.com".to_string()),
            expires: CookieExpires::SessionEnd,
            path: ("/".to_string(), false),
        };
        let cookie_store = CookieStore::from_cookies(vec![cookie]);

        let sender = CookieVerifyingSender { expected_cookie: "session=abc123".to_string() };
        let transaction = HttpTransaction::with_cookie_store(sender, cookie_store);

        let request = SendableHttpRequest {
            url: "https://example.com/api".to_string(),
            method: "GET".to_string(),
            headers: vec![],
            ..Default::default()
        };

        let (_tx, rx) = tokio::sync::watch::channel(false);
        let (event_tx, _event_rx) = mpsc::channel(100);
        let result = transaction.execute_with_cancellation(request, rx, event_tx).await;
        assert!(result.is_ok());
    }

    #[tokio::test]
    async fn test_set_cookie_parsing() {
        // Create a cookie store
        let cookie_store = CookieStore::new();

        // Mock sender that returns a Set-Cookie header
        struct SetCookieSender;

        #[async_trait]
        impl HttpSender for SetCookieSender {
            async fn send(
                &self,
                _request: SendableHttpRequest,
                _event_tx: mpsc::Sender<HttpResponseEvent>,
            ) -> Result<HttpResponse> {
                let headers =
                    vec![("set-cookie".to_string(), "session=xyz789; Path=/".to_string())];

                let body_stream: Pin<Box<dyn AsyncRead + Send>> =
                    Box::pin(std::io::Cursor::new(vec![]));
                Ok(HttpResponse::new(
                    200,
                    None,
                    headers,
                    Vec::new(),
                    None,
                    "https://example.com".to_string(),
                    None,
                    Some("HTTP/1.1".to_string()),
                    body_stream,
                    ContentEncoding::Identity,
                ))
            }
        }

        let sender = SetCookieSender;
        let transaction = HttpTransaction::with_cookie_store(sender, cookie_store.clone());

        let request = SendableHttpRequest {
            url: "https://example.com/login".to_string(),
            method: "POST".to_string(),
            headers: vec![],
            ..Default::default()
        };

        let (_tx, rx) = tokio::sync::watch::channel(false);
        let (event_tx, _event_rx) = mpsc::channel(100);
        let result = transaction.execute_with_cancellation(request, rx, event_tx).await;
        assert!(result.is_ok());

        // Verify the cookie was stored
        let cookies = cookie_store.get_all_cookies();
        assert_eq!(cookies.len(), 1);
        assert!(cookies[0].raw_cookie.contains("session=xyz789"));
    }

    #[tokio::test]
    async fn test_multiple_set_cookie_headers() {
        // Create a cookie store
        let cookie_store = CookieStore::new();

        // Mock sender that returns multiple Set-Cookie headers
        struct MultiSetCookieSender;

        #[async_trait]
        impl HttpSender for MultiSetCookieSender {
            async fn send(
                &self,
                _request: SendableHttpRequest,
                _event_tx: mpsc::Sender<HttpResponseEvent>,
            ) -> Result<HttpResponse> {
                // Multiple Set-Cookie headers (this is standard HTTP behavior)
                let headers = vec![
                    ("set-cookie".to_string(), "session=abc123; Path=/".to_string()),
                    ("set-cookie".to_string(), "user_id=42; Path=/".to_string()),
                    (
                        "set-cookie".to_string(),
                        "preferences=dark; Path=/; Max-Age=86400".to_string(),
                    ),
                ];

                let body_stream: Pin<Box<dyn AsyncRead + Send>> =
                    Box::pin(std::io::Cursor::new(vec![]));
                Ok(HttpResponse::new(
                    200,
                    None,
                    headers,
                    Vec::new(),
                    None,
                    "https://example.com".to_string(),
                    None,
                    Some("HTTP/1.1".to_string()),
                    body_stream,
                    ContentEncoding::Identity,
                ))
            }
        }

        let sender = MultiSetCookieSender;
        let transaction = HttpTransaction::with_cookie_store(sender, cookie_store.clone());

        let request = SendableHttpRequest {
            url: "https://example.com/login".to_string(),
            method: "POST".to_string(),
            headers: vec![],
            ..Default::default()
        };

        let (_tx, rx) = tokio::sync::watch::channel(false);
        let (event_tx, _event_rx) = mpsc::channel(100);
        let result = transaction.execute_with_cancellation(request, rx, event_tx).await;
        assert!(result.is_ok());

        // Verify all three cookies were stored
        let cookies = cookie_store.get_all_cookies();
        assert_eq!(cookies.len(), 3, "All three Set-Cookie headers should be parsed and stored");

        let cookie_values: Vec<&str> = cookies.iter().map(|c| c.raw_cookie.as_str()).collect();
        assert!(
            cookie_values.iter().any(|c| c.contains("session=abc123")),
            "session cookie should be stored"
        );
        assert!(
            cookie_values.iter().any(|c| c.contains("user_id=42")),
            "user_id cookie should be stored"
        );
        assert!(
            cookie_values.iter().any(|c| c.contains("preferences=dark")),
            "preferences cookie should be stored"
        );
    }

    #[tokio::test]
    async fn test_cookies_across_redirects() {
        use std::sync::atomic::{AtomicUsize, Ordering};

        // Create a cookie store
        let cookie_store = CookieStore::new();

        // Track request count
        let request_count = Arc::new(AtomicUsize::new(0));
        let request_count_clone = request_count.clone();

        struct RedirectWithCookiesSender {
            request_count: Arc<AtomicUsize>,
        }

        #[async_trait]
        impl HttpSender for RedirectWithCookiesSender {
            async fn send(
                &self,
                request: SendableHttpRequest,
                _event_tx: mpsc::Sender<HttpResponseEvent>,
            ) -> Result<HttpResponse> {
                let count = self.request_count.fetch_add(1, Ordering::SeqCst);

                let (status, headers) = if count == 0 {
                    // First request: return redirect with Set-Cookie
                    let h = vec![
                        ("location".to_string(), "https://example.com/final".to_string()),
                        ("set-cookie".to_string(), "redirect_cookie=value1".to_string()),
                    ];
                    (302, h)
                } else {
                    // Second request: verify cookie was sent
                    let cookie_header =
                        request.headers.iter().find(|(k, _)| k.eq_ignore_ascii_case("cookie"));

                    assert!(cookie_header.is_some(), "Cookie header should be present on redirect");
                    assert!(
                        cookie_header.unwrap().1.contains("redirect_cookie=value1"),
                        "Redirect cookie should be included"
                    );

                    (200, Vec::new())
                };

                let body_stream: Pin<Box<dyn AsyncRead + Send>> =
                    Box::pin(std::io::Cursor::new(vec![]));
                Ok(HttpResponse::new(
                    status,
                    None,
                    headers,
                    Vec::new(),
                    None,
                    "https://example.com".to_string(),
                    None,
                    Some("HTTP/1.1".to_string()),
                    body_stream,
                    ContentEncoding::Identity,
                ))
            }
        }

        let sender = RedirectWithCookiesSender { request_count: request_count_clone };
        let transaction = HttpTransaction::with_cookie_store(sender, cookie_store);

        let request = SendableHttpRequest {
            url: "https://example.com/start".to_string(),
            method: "GET".to_string(),
            headers: vec![],
            options: crate::types::SendableHttpRequestOptions {
                follow_redirects: true,
                ..Default::default()
            },
            ..Default::default()
        };

        let (_tx, rx) = tokio::sync::watch::channel(false);
        let (event_tx, _event_rx) = mpsc::channel(100);
        let result = transaction.execute_with_cancellation(request, rx, event_tx).await;
        assert!(result.is_ok());
        assert_eq!(request_count.load(Ordering::SeqCst), 2);
    }
}
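
Putting the pieces together: a transaction wraps a sender, injects and stores cookies, follows redirects up to the limit, and can be aborted through a watch channel. A minimal sketch using the `ReqwestSender` from earlier in this diff (mirrors the usage pattern in the tests above; assumes a Tokio runtime and the crate's `Result`):

```
use tokio::sync::{mpsc, watch};

async fn demo() -> Result<()> {
    let sender = ReqwestSender::new()?;
    let transaction = HttpTransaction::new(sender); // 10 redirects, no cookie store

    let request = SendableHttpRequest {
        url: "https://example.com".to_string(),
        method: "GET".to_string(),
        ..Default::default()
    };

    // Sending `true` through the watch channel aborts the in-flight request
    let (_cancel_tx, cancel_rx) = watch::channel(false);
    let (event_tx, _event_rx) = mpsc::channel(100);

    let response = transaction.execute_with_cancellation(request, cancel_rx, event_tx).await?;
    let (body, _stats) = response.bytes().await?;
    println!("{} bytes", body.len());
    Ok(())
}
```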
@@ -1,981 +0,0 @@
|
||||
use crate::chained_reader::{ChainedReader, ReaderType};
|
||||
use crate::error::Error::RequestError;
|
||||
use crate::error::Result;
|
||||
use crate::path_placeholders::apply_path_placeholders;
|
||||
use crate::proto::ensure_proto;
|
||||
use bytes::Bytes;
|
||||
use log::warn;
|
||||
use std::collections::BTreeMap;
|
||||
use std::pin::Pin;
|
||||
use std::time::Duration;
|
||||
use tokio::io::AsyncRead;
|
||||
use yaak_common::serde::{get_bool, get_str, get_str_map};
|
||||
use yaak_models::models::HttpRequest;
|
||||
|
||||
pub(crate) const MULTIPART_BOUNDARY: &str = "------YaakFormBoundary";
|
||||
|
||||
pub enum SendableBody {
|
||||
Bytes(Bytes),
|
||||
Stream(Pin<Box<dyn AsyncRead + Send + 'static>>),
|
||||
}
|
||||
|
||||
enum SendableBodyWithMeta {
|
||||
Bytes(Bytes),
|
||||
Stream {
|
||||
data: Pin<Box<dyn AsyncRead + Send + 'static>>,
|
||||
content_length: Option<usize>,
|
||||
},
|
||||
}
|
||||
|
||||
impl From<SendableBodyWithMeta> for SendableBody {
|
||||
fn from(value: SendableBodyWithMeta) -> Self {
|
||||
match value {
|
||||
SendableBodyWithMeta::Bytes(b) => SendableBody::Bytes(b),
|
||||
SendableBodyWithMeta::Stream { data, .. } => SendableBody::Stream(data),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Default)]
|
||||
pub struct SendableHttpRequest {
|
||||
pub url: String,
|
||||
pub method: String,
|
||||
pub headers: Vec<(String, String)>,
|
||||
pub body: Option<SendableBody>,
|
||||
pub options: SendableHttpRequestOptions,
|
||||
}
|
||||
|
||||
#[derive(Default, Clone)]
|
||||
pub struct SendableHttpRequestOptions {
|
||||
pub timeout: Option<Duration>,
|
||||
pub follow_redirects: bool,
|
||||
}
|
||||
|
||||
impl SendableHttpRequest {
|
||||
pub async fn from_http_request(
|
||||
r: &HttpRequest,
|
||||
options: SendableHttpRequestOptions,
|
||||
) -> Result<Self> {
|
||||
let initial_headers = build_headers(r);
|
||||
let (body, headers) = build_body(&r.method, &r.body_type, &r.body, initial_headers).await?;
|
||||
|
||||
Ok(Self {
|
||||
url: build_url(r),
|
||||
method: r.method.to_uppercase(),
|
||||
headers,
|
||||
body: body.into(),
|
||||
options,
|
||||
})
|
||||
}
|
||||
|
||||
pub fn insert_header(&mut self, header: (String, String)) {
|
||||
if let Some(existing) =
|
||||
self.headers.iter_mut().find(|h| h.0.to_lowercase() == header.0.to_lowercase())
|
||||
{
|
||||
existing.1 = header.1;
|
||||
} else {
|
||||
self.headers.push(header);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn append_query_params(url: &str, params: Vec<(String, String)>) -> String {
|
||||
let url_string = url.to_string();
|
||||
if params.is_empty() {
|
||||
return url.to_string();
|
||||
}
|
||||
|
||||
// Build query string
|
||||
let query_string = params
|
||||
.iter()
|
||||
.map(|(name, value)| {
|
||||
format!("{}={}", urlencoding::encode(name), urlencoding::encode(value))
|
||||
})
|
||||
.collect::<Vec<_>>()
|
||||
.join("&");
|
||||
|
||||
// Split URL into parts: base URL, query, and fragment
|
||||
let (base_and_query, fragment) = if let Some(hash_pos) = url_string.find('#') {
|
||||
let (before_hash, after_hash) = url_string.split_at(hash_pos);
|
||||
(before_hash.to_string(), Some(after_hash.to_string()))
|
||||
} else {
|
||||
(url_string, None)
|
||||
};
|
||||
|
||||
// Now handle query parameters on the base URL (without fragment)
|
||||
let mut result = if base_and_query.contains('?') {
|
||||
// Check if there's already a query string after the '?'
|
||||
let parts: Vec<&str> = base_and_query.splitn(2, '?').collect();
|
||||
if parts.len() == 2 && !parts[1].trim().is_empty() {
|
||||
// Append with & if there are existing parameters
|
||||
format!("{}&{}", base_and_query, query_string)
|
||||
} else {
|
||||
// Just append the new parameters directly (URL ends with '?')
|
||||
format!("{}{}", base_and_query, query_string)
|
||||
}
|
||||
} else {
|
||||
// No existing query parameters, add with '?'
|
||||
format!("{}?{}", base_and_query, query_string)
|
||||
};
|
||||
|
||||
// Re-append the fragment if it exists
|
||||
if let Some(fragment) = fragment {
|
||||
result.push_str(&fragment);
|
||||
}
|
||||
|
||||
result
|
||||
}
|
||||
|
||||
fn build_url(r: &HttpRequest) -> String {
|
||||
let (url_string, params) = apply_path_placeholders(&ensure_proto(&r.url), &r.url_parameters);
|
||||
append_query_params(
|
||||
&url_string,
|
||||
params
|
||||
.iter()
|
||||
.filter(|p| p.enabled && !p.name.is_empty())
|
||||
.map(|p| (p.name.clone(), p.value.clone()))
|
||||
.collect(),
|
||||
)
|
||||
}
|
||||
|
||||
fn build_headers(r: &HttpRequest) -> Vec<(String, String)> {
|
||||
r.headers
|
||||
.iter()
|
||||
.filter_map(|h| {
|
||||
if h.enabled && !h.name.is_empty() {
|
||||
Some((h.name.clone(), h.value.clone()))
|
||||
} else {
|
||||
None
|
||||
}
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
|
||||
async fn build_body(
|
||||
method: &str,
|
||||
body_type: &Option<String>,
|
||||
body: &BTreeMap<String, serde_json::Value>,
|
||||
headers: Vec<(String, String)>,
|
||||
) -> Result<(Option<SendableBody>, Vec<(String, String)>)> {
|
||||
let body_type = match &body_type {
|
||||
None => return Ok((None, headers)),
|
||||
Some(t) => t,
|
||||
};
|
||||
|
||||
let (body, content_type) = match body_type.as_str() {
|
||||
"binary" => (build_binary_body(&body).await?, None),
|
||||
"graphql" => (build_graphql_body(&method, &body), Some("application/json".to_string())),
|
||||
"application/x-www-form-urlencoded" => {
|
||||
(build_form_body(&body), Some("application/x-www-form-urlencoded".to_string()))
|
||||
}
|
||||
"multipart/form-data" => build_multipart_body(&body, &headers).await?,
|
||||
_ if body.contains_key("text") => (build_text_body(&body), None),
|
||||
t => {
|
||||
warn!("Unsupported body type: {}", t);
|
||||
(None, None)
|
||||
}
|
||||
};
|
||||
|
||||
// Add or update the Content-Type header
|
||||
let mut headers = headers;
|
||||
if let Some(ct) = content_type {
|
||||
if let Some(existing) = headers.iter_mut().find(|h| h.0.to_lowercase() == "content-type") {
|
||||
existing.1 = ct;
|
||||
} else {
|
||||
headers.push(("Content-Type".to_string(), ct));
|
||||
}
|
||||
}
|
||||
|
||||
// Check if Transfer-Encoding: chunked is already set
|
||||
let has_chunked_encoding = headers.iter().any(|h| {
|
||||
h.0.to_lowercase() == "transfer-encoding" && h.1.to_lowercase().contains("chunked")
|
||||
});
|
||||
|
||||
// Add a Content-Length header only if chunked encoding is not being used
|
||||
if !has_chunked_encoding {
|
||||
let content_length = match body {
|
||||
Some(SendableBodyWithMeta::Bytes(ref bytes)) => Some(bytes.len()),
|
||||
Some(SendableBodyWithMeta::Stream { content_length, .. }) => content_length,
|
||||
None => None,
|
||||
};
|
||||
|
||||
if let Some(cl) = content_length {
|
||||
headers.push(("Content-Length".to_string(), cl.to_string()));
|
||||
}
|
||||
}
|
||||
|
||||
Ok((body.map(|b| b.into()), headers))
|
||||
}
|
||||
|
||||
fn build_form_body(body: &BTreeMap<String, serde_json::Value>) -> Option<SendableBodyWithMeta> {
    let form_params = match body.get("form").map(|f| f.as_array()) {
        Some(Some(f)) => f,
        _ => return None,
    };

    let mut body = String::new();
    for p in form_params {
        let enabled = get_bool(p, "enabled", true);
        let name = get_str(p, "name");
        if !enabled || name.is_empty() {
            continue;
        }
        let value = get_str(p, "value");
        if !body.is_empty() {
            body.push('&');
        }
        body.push_str(&urlencoding::encode(&name));
        body.push('=');
        body.push_str(&urlencoding::encode(&value));
    }

    if body.is_empty() { None } else { Some(SendableBodyWithMeta::Bytes(Bytes::from(body))) }
}

async fn build_binary_body(
    body: &BTreeMap<String, serde_json::Value>,
) -> Result<Option<SendableBodyWithMeta>> {
    let file_path = match body.get("filePath").map(|f| f.as_str()) {
        Some(Some(f)) => f,
        _ => return Ok(None),
    };

    // Open a file for streaming
    let content_length = tokio::fs::metadata(file_path)
        .await
        .map_err(|e| RequestError(format!("Failed to get file metadata: {}", e)))?
        .len();

    let file = tokio::fs::File::open(file_path)
        .await
        .map_err(|e| RequestError(format!("Failed to open file: {}", e)))?;

    Ok(Some(SendableBodyWithMeta::Stream {
        data: Box::pin(file),
        content_length: Some(content_length as usize),
    }))
}

fn build_text_body(body: &BTreeMap<String, serde_json::Value>) -> Option<SendableBodyWithMeta> {
    let text = get_str_map(body, "text");
    if text.is_empty() {
        None
    } else {
        Some(SendableBodyWithMeta::Bytes(Bytes::from(text.to_string())))
    }
}

fn build_graphql_body(
    method: &str,
    body: &BTreeMap<String, serde_json::Value>,
) -> Option<SendableBodyWithMeta> {
    let query = get_str_map(body, "query");
    let variables = get_str_map(body, "variables");

    if method.to_lowercase() == "get" {
        // GraphQL GET requests use query parameters, not a body
        return None;
    }

    let body = if variables.trim().is_empty() {
        format!(r#"{{"query":{}}}"#, serde_json::to_string(&query).unwrap_or_default())
    } else {
        format!(
            r#"{{"query":{},"variables":{}}}"#,
            serde_json::to_string(&query).unwrap_or_default(),
            variables
        )
    };

    Some(SendableBodyWithMeta::Bytes(Bytes::from(body)))
}
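For GET, the GraphQL document conventionally travels in the query string instead; this diff doesn't show that path, so the following is only a hedged sketch of the conventional encoding, reusing the `urlencoding` crate already used by `build_form_body`:

```rust
// Hypothetical helper (not part of this diff): encode a GraphQL document
// into a GET URL the way most GraphQL servers expect it.
fn graphql_get_url(base: &str, query: &str, variables: &str) -> String {
    let mut url = format!("{}?query={}", base, urlencoding::encode(query));
    if !variables.trim().is_empty() {
        url.push_str("&variables=");
        url.push_str(&urlencoding::encode(variables));
    }
    url
}
```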
async fn build_multipart_body(
    body: &BTreeMap<String, serde_json::Value>,
    headers: &Vec<(String, String)>,
) -> Result<(Option<SendableBodyWithMeta>, Option<String>)> {
    let boundary = extract_boundary_from_headers(headers);

    let form_params = match body.get("form").map(|f| f.as_array()) {
        Some(Some(f)) => f,
        _ => return Ok((None, None)),
    };

    // Build a list of readers for streaming and calculate total content length
    let mut readers: Vec<ReaderType> = Vec::new();
    let mut has_content = false;
    let mut total_size: usize = 0;

    for p in form_params {
        let enabled = get_bool(p, "enabled", true);
        let name = get_str(p, "name");
        if !enabled || name.is_empty() {
            continue;
        }

        has_content = true;

        // Add boundary delimiter
        let boundary_bytes = format!("--{}\r\n", boundary).into_bytes();
        total_size += boundary_bytes.len();
        readers.push(ReaderType::Bytes(boundary_bytes));

        let file_path = get_str(p, "file");
        let value = get_str(p, "value");
        let content_type = get_str(p, "contentType");

        if file_path.is_empty() {
            // Text field
            let header = if !content_type.is_empty() {
                format!(
                    "Content-Disposition: form-data; name=\"{}\"\r\nContent-Type: {}\r\n\r\n{}",
                    name, content_type, value
                )
            } else {
                format!("Content-Disposition: form-data; name=\"{}\"\r\n\r\n{}", name, value)
            };
            let header_bytes = header.into_bytes();
            total_size += header_bytes.len();
            readers.push(ReaderType::Bytes(header_bytes));
        } else {
            // File field - validate that file exists first
            if !tokio::fs::try_exists(file_path).await.unwrap_or(false) {
                return Err(RequestError(format!("File not found: {}", file_path)));
            }

            // Get file size for content length calculation
            let file_metadata = tokio::fs::metadata(file_path)
                .await
                .map_err(|e| RequestError(format!("Failed to get file metadata: {}", e)))?;
            let file_size = file_metadata.len() as usize;

            let filename = get_str(p, "filename");
            let filename = if filename.is_empty() {
                std::path::Path::new(file_path)
                    .file_name()
                    .and_then(|n| n.to_str())
                    .unwrap_or("file")
            } else {
                filename
            };

            // Add content type
            let mime_type = if !content_type.is_empty() {
                content_type.to_string()
            } else {
                // Guess mime type from file extension
                mime_guess::from_path(file_path).first_or_octet_stream().to_string()
            };

            let header = format!(
                "Content-Disposition: form-data; name=\"{}\"; filename=\"{}\"\r\nContent-Type: {}\r\n\r\n",
                name, filename, mime_type
            );
            let header_bytes = header.into_bytes();
            total_size += header_bytes.len();
            total_size += file_size;
            readers.push(ReaderType::Bytes(header_bytes));

            // Add a file path for streaming
            readers.push(ReaderType::FilePath(file_path.to_string()));
        }

        let line_ending = b"\r\n".to_vec();
        total_size += line_ending.len();
        readers.push(ReaderType::Bytes(line_ending));
    }

    if has_content {
        // Add the final boundary
        let final_boundary = format!("--{}--\r\n", boundary).into_bytes();
        total_size += final_boundary.len();
        readers.push(ReaderType::Bytes(final_boundary));

        let content_type = format!("multipart/form-data; boundary={}", boundary);
        let stream = ChainedReader::new(readers);
        Ok((
            Some(SendableBodyWithMeta::Stream {
                data: Box::pin(stream),
                content_length: Some(total_size),
            }),
            Some(content_type),
        ))
    } else {
        Ok((None, None))
    }
}
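`ReaderType` and `ChainedReader` are defined elsewhere in the crate and are not part of this diff. A minimal sketch of the same idea, chaining in-memory byte buffers and a file handle into one sequential `AsyncRead` using tokio's built-in `chain` combinator (file path borrowed from the tests below):

```rust
// Hedged sketch, not the crate's actual ChainedReader: tokio implements
// AsyncRead for std::io::Cursor, so buffers and files can be chained directly.
use std::io::Cursor;
use tokio::io::AsyncReadExt;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let header = Cursor::new(b"--boundary\r\n".to_vec());
    let file = tokio::fs::File::open("./tests/test.txt").await?;
    let trailer = Cursor::new(b"\r\n--boundary--\r\n".to_vec());

    // `chain` yields each reader's bytes in sequence, mirroring the
    // ReaderType list assembled by build_multipart_body above.
    let mut combined = header.chain(file).chain(trailer);

    let mut out = Vec::new();
    combined.read_to_end(&mut out).await?;
    println!("{} bytes total", out.len());
    Ok(())
}
```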
fn extract_boundary_from_headers(headers: &Vec<(String, String)>) -> String {
    headers
        .iter()
        .find(|h| h.0.to_lowercase() == "content-type")
        .and_then(|h| {
            // Extract boundary from the Content-Type header (e.g., "multipart/form-data; boundary=xyz")
            h.1.split(';')
                .find(|part| part.trim().starts_with("boundary="))
                .and_then(|boundary_part| boundary_part.split('=').nth(1))
                .map(|b| b.trim().to_string())
        })
        .unwrap_or_else(|| MULTIPART_BOUNDARY.to_string())
}
#[cfg(test)]
mod tests {
    use super::*;
    use bytes::Bytes;
    use serde_json::json;
    use std::collections::BTreeMap;
    use yaak_models::models::{HttpRequest, HttpUrlParameter};

    #[test]
    fn test_build_url_no_params() {
        let r = HttpRequest {
            url: "https://example.com/api".to_string(),
            url_parameters: vec![],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://example.com/api");
    }

    #[test]
    fn test_build_url_with_params() {
        let r = HttpRequest {
            url: "https://example.com/api".to_string(),
            url_parameters: vec![
                HttpUrlParameter {
                    enabled: true,
                    name: "foo".to_string(),
                    value: "bar".to_string(),
                    id: None,
                },
                HttpUrlParameter {
                    enabled: true,
                    name: "baz".to_string(),
                    value: "qux".to_string(),
                    id: None,
                },
            ],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://example.com/api?foo=bar&baz=qux");
    }

    #[test]
    fn test_build_url_with_disabled_params() {
        let r = HttpRequest {
            url: "https://example.com/api".to_string(),
            url_parameters: vec![
                HttpUrlParameter {
                    enabled: false,
                    name: "disabled".to_string(),
                    value: "value".to_string(),
                    id: None,
                },
                HttpUrlParameter {
                    enabled: true,
                    name: "enabled".to_string(),
                    value: "value".to_string(),
                    id: None,
                },
            ],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://example.com/api?enabled=value");
    }

    #[test]
    fn test_build_url_with_existing_query() {
        let r = HttpRequest {
            url: "https://example.com/api?existing=param".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "new".to_string(),
                value: "value".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://example.com/api?existing=param&new=value");
    }

    #[test]
    fn test_build_url_with_empty_existing_query() {
        let r = HttpRequest {
            url: "https://example.com/api?".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "new".to_string(),
                value: "value".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://example.com/api?new=value");
    }

    #[test]
    fn test_build_url_with_special_chars() {
        let r = HttpRequest {
            url: "https://example.com/api".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "special chars!@#".to_string(),
                value: "value with spaces & symbols".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(
            result,
            "https://example.com/api?special%20chars%21%40%23=value%20with%20spaces%20%26%20symbols"
        );
    }

    #[test]
    fn test_build_url_adds_protocol() {
        let r = HttpRequest {
            url: "example.com/api".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "foo".to_string(),
                value: "bar".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        // ensure_proto defaults to http:// for regular domains
        assert_eq!(result, "http://example.com/api?foo=bar");
    }

    #[test]
    fn test_build_url_adds_https_for_dev_domain() {
        let r = HttpRequest {
            url: "example.dev/api".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "foo".to_string(),
                value: "bar".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        // .dev domains force https
        assert_eq!(result, "https://example.dev/api?foo=bar");
    }

    #[test]
    fn test_build_url_with_fragment() {
        let r = HttpRequest {
            url: "https://example.com/api#section".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "foo".to_string(),
                value: "bar".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://example.com/api?foo=bar#section");
    }

    #[test]
    fn test_build_url_with_existing_query_and_fragment() {
        let r = HttpRequest {
            url: "https://yaak.app?foo=bar#some-hash".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "baz".to_string(),
                value: "qux".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://yaak.app?foo=bar&baz=qux#some-hash");
    }

    #[test]
    fn test_build_url_with_empty_query_and_fragment() {
        let r = HttpRequest {
            url: "https://example.com/api?#section".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "foo".to_string(),
                value: "bar".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://example.com/api?foo=bar#section");
    }

    #[test]
    fn test_build_url_with_fragment_containing_special_chars() {
        let r = HttpRequest {
            url: "https://example.com#section/with/slashes?and=fake&query".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "real".to_string(),
                value: "param".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://example.com?real=param#section/with/slashes?and=fake&query");
    }

    #[test]
    fn test_build_url_preserves_empty_fragment() {
        let r = HttpRequest {
            url: "https://example.com/api#".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "foo".to_string(),
                value: "bar".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        assert_eq!(result, "https://example.com/api?foo=bar#");
    }

    #[test]
    fn test_build_url_with_multiple_fragments() {
        // Testing edge case where the URL has multiple # characters (though technically invalid)
        let r = HttpRequest {
            url: "https://example.com#section#subsection".to_string(),
            url_parameters: vec![HttpUrlParameter {
                enabled: true,
                name: "foo".to_string(),
                value: "bar".to_string(),
                id: None,
            }],
            ..Default::default()
        };

        let result = build_url(&r);
        // Should treat everything after first # as fragment
        assert_eq!(result, "https://example.com?foo=bar#section#subsection");
    }

    #[tokio::test]
    async fn test_text_body() {
        let mut body = BTreeMap::new();
        body.insert("text".to_string(), json!("Hello, World!"));

        let result = build_text_body(&body);
        match result {
            Some(SendableBodyWithMeta::Bytes(bytes)) => {
                assert_eq!(bytes, Bytes::from("Hello, World!"))
            }
            _ => panic!("Expected Some(SendableBody::Bytes)"),
        }
    }

    #[tokio::test]
    async fn test_text_body_empty() {
        let mut body = BTreeMap::new();
        body.insert("text".to_string(), json!(""));

        let result = build_text_body(&body);
        assert!(result.is_none());
    }

    #[tokio::test]
    async fn test_text_body_missing() {
        let body = BTreeMap::new();

        let result = build_text_body(&body);
        assert!(result.is_none());
    }

    #[tokio::test]
    async fn test_form_urlencoded_body() -> Result<()> {
        let mut body = BTreeMap::new();
        body.insert(
            "form".to_string(),
            json!([
                { "enabled": true, "name": "basic", "value": "aaa"},
                { "enabled": true, "name": "fUnkey Stuff!$*#(", "value": "*)%&#$)@ *$#)@&"},
                { "enabled": false, "name": "disabled", "value": "won't show"},
            ]),
        );

        let result = build_form_body(&body);
        match result {
            Some(SendableBodyWithMeta::Bytes(bytes)) => {
                let expected = "basic=aaa&fUnkey%20Stuff%21%24%2A%23%28=%2A%29%25%26%23%24%29%40%20%2A%24%23%29%40%26";
                assert_eq!(bytes, Bytes::from(expected));
            }
            _ => panic!("Expected Some(SendableBody::Bytes)"),
        }
        Ok(())
    }

    #[tokio::test]
    async fn test_form_urlencoded_body_missing_form() {
        let body = BTreeMap::new();
        let result = build_form_body(&body);
        assert!(result.is_none());
    }

    #[tokio::test]
    async fn test_binary_body() -> Result<()> {
        let mut body = BTreeMap::new();
        body.insert("filePath".to_string(), json!("./tests/test.txt"));

        let result = build_binary_body(&body).await?;
        assert!(matches!(result, Some(SendableBodyWithMeta::Stream { .. })));
        Ok(())
    }

    #[tokio::test]
    async fn test_binary_body_file_not_found() {
        let mut body = BTreeMap::new();
        body.insert("filePath".to_string(), json!("./nonexistent/file.txt"));

        let result = build_binary_body(&body).await;
        assert!(result.is_err());
        if let Err(e) = result {
            assert!(matches!(e, RequestError(_)));
        }
    }

    #[tokio::test]
    async fn test_graphql_body_with_variables() {
        let mut body = BTreeMap::new();
        body.insert("query".to_string(), json!("{ user(id: $id) { name } }"));
        body.insert("variables".to_string(), json!(r#"{"id": "123"}"#));

        let result = build_graphql_body("POST", &body);
        match result {
            Some(SendableBodyWithMeta::Bytes(bytes)) => {
                let expected =
                    r#"{"query":"{ user(id: $id) { name } }","variables":{"id": "123"}}"#;
                assert_eq!(bytes, Bytes::from(expected));
            }
            _ => panic!("Expected Some(SendableBody::Bytes)"),
        }
    }

    #[tokio::test]
    async fn test_graphql_body_without_variables() {
        let mut body = BTreeMap::new();
        body.insert("query".to_string(), json!("{ users { name } }"));
        body.insert("variables".to_string(), json!(""));

        let result = build_graphql_body("POST", &body);
        match result {
            Some(SendableBodyWithMeta::Bytes(bytes)) => {
                let expected = r#"{"query":"{ users { name } }"}"#;
                assert_eq!(bytes, Bytes::from(expected));
            }
            _ => panic!("Expected Some(SendableBody::Bytes)"),
        }
    }

    #[tokio::test]
    async fn test_graphql_body_get_method() {
        let mut body = BTreeMap::new();
        body.insert("query".to_string(), json!("{ users { name } }"));

        let result = build_graphql_body("GET", &body);
        assert!(result.is_none());
    }

    #[tokio::test]
    async fn test_multipart_body_text_fields() -> Result<()> {
        let mut body = BTreeMap::new();
        body.insert(
            "form".to_string(),
            json!([
                { "enabled": true, "name": "field1", "value": "value1", "file": "" },
                { "enabled": true, "name": "field2", "value": "value2", "file": "" },
                { "enabled": false, "name": "disabled", "value": "won't show", "file": "" },
            ]),
        );

        let (result, content_type) = build_multipart_body(&body, &vec![]).await?;
        assert!(content_type.is_some());

        match result {
            Some(SendableBodyWithMeta::Stream { data: mut stream, content_length }) => {
                // Read the entire stream to verify content
                let mut buf = Vec::new();
                use tokio::io::AsyncReadExt;
                stream.read_to_end(&mut buf).await.expect("Failed to read stream");
                let body_str = String::from_utf8_lossy(&buf);
                assert_eq!(
                    body_str,
                    "--------YaakFormBoundary\r\nContent-Disposition: form-data; name=\"field1\"\r\n\r\nvalue1\r\n--------YaakFormBoundary\r\nContent-Disposition: form-data; name=\"field2\"\r\n\r\nvalue2\r\n--------YaakFormBoundary--\r\n",
                );
                assert_eq!(content_length, Some(body_str.len()));
            }
            _ => panic!("Expected Some(SendableBody::Stream)"),
        }

        assert_eq!(
            content_type.unwrap(),
            format!("multipart/form-data; boundary={}", MULTIPART_BOUNDARY)
        );

        Ok(())
    }

    #[tokio::test]
    async fn test_multipart_body_with_file() -> Result<()> {
        let mut body = BTreeMap::new();
        body.insert(
            "form".to_string(),
            json!([
                { "enabled": true, "name": "file_field", "file": "./tests/test.txt", "filename": "custom.txt", "contentType": "text/plain" },
            ]),
        );

        let (result, content_type) = build_multipart_body(&body, &vec![]).await?;
        assert!(content_type.is_some());

        match result {
            Some(SendableBodyWithMeta::Stream { data: mut stream, content_length }) => {
                // Read the entire stream to verify content
                let mut buf = Vec::new();
                use tokio::io::AsyncReadExt;
                stream.read_to_end(&mut buf).await.expect("Failed to read stream");
                let body_str = String::from_utf8_lossy(&buf);
                assert_eq!(
                    body_str,
                    "--------YaakFormBoundary\r\nContent-Disposition: form-data; name=\"file_field\"; filename=\"custom.txt\"\r\nContent-Type: text/plain\r\n\r\nThis is a test file!\n\r\n--------YaakFormBoundary--\r\n"
                );
                assert_eq!(content_length, Some(body_str.len()));
            }
            _ => panic!("Expected Some(SendableBody::Stream)"),
        }

        assert_eq!(
            content_type.unwrap(),
            format!("multipart/form-data; boundary={}", MULTIPART_BOUNDARY)
        );

        Ok(())
    }

    #[tokio::test]
    async fn test_multipart_body_empty() -> Result<()> {
        let body = BTreeMap::new();
        let (result, content_type) = build_multipart_body(&body, &vec![]).await?;
        assert!(result.is_none());
        assert_eq!(content_type, None);
        Ok(())
    }

    #[test]
    fn test_extract_boundary_from_headers_with_custom_boundary() {
        let headers = vec![(
            "Content-Type".to_string(),
            "multipart/form-data; boundary=customBoundary123".to_string(),
        )];
        let boundary = extract_boundary_from_headers(&headers);
        assert_eq!(boundary, "customBoundary123");
    }

    #[test]
    fn test_extract_boundary_from_headers_default() {
        let headers = vec![("Accept".to_string(), "*/*".to_string())];
        let boundary = extract_boundary_from_headers(&headers);
        assert_eq!(boundary, MULTIPART_BOUNDARY);
    }

    #[test]
    fn test_extract_boundary_from_headers_no_boundary_in_content_type() {
        let headers = vec![("Content-Type".to_string(), "multipart/form-data".to_string())];
        let boundary = extract_boundary_from_headers(&headers);
        assert_eq!(boundary, MULTIPART_BOUNDARY);
    }

    #[test]
    fn test_extract_boundary_case_insensitive() {
        let headers = vec![(
            "Content-Type".to_string(),
            "multipart/form-data; boundary=myBoundary".to_string(),
        )];
        let boundary = extract_boundary_from_headers(&headers);
        assert_eq!(boundary, "myBoundary");
    }

    #[tokio::test]
    async fn test_no_content_length_with_chunked_encoding() -> Result<()> {
        let mut body = BTreeMap::new();
        body.insert("text".to_string(), json!("Hello, World!"));

        // Headers with Transfer-Encoding: chunked
        let headers = vec![("Transfer-Encoding".to_string(), "chunked".to_string())];

        let (_, result_headers) =
            build_body("POST", &Some("text/plain".to_string()), &body, headers).await?;

        // Verify that Content-Length is NOT present when Transfer-Encoding: chunked is set
        let has_content_length =
            result_headers.iter().any(|h| h.0.to_lowercase() == "content-length");
        assert!(!has_content_length, "Content-Length should not be present with chunked encoding");

        // Verify that the Transfer-Encoding header is still present
        let has_chunked = result_headers.iter().any(|h| {
            h.0.to_lowercase() == "transfer-encoding" && h.1.to_lowercase().contains("chunked")
        });
        assert!(has_chunked, "Transfer-Encoding: chunked should be preserved");

        Ok(())
    }

    #[tokio::test]
    async fn test_content_length_without_chunked_encoding() -> Result<()> {
        let mut body = BTreeMap::new();
        body.insert("text".to_string(), json!("Hello, World!"));

        // Headers without Transfer-Encoding: chunked
        let headers = vec![];

        let (_, result_headers) =
            build_body("POST", &Some("text/plain".to_string()), &body, headers).await?;

        // Verify that Content-Length IS present when Transfer-Encoding: chunked is NOT set
        let content_length_header =
            result_headers.iter().find(|h| h.0.to_lowercase() == "content-length");
        assert!(
            content_length_header.is_some(),
            "Content-Length should be present without chunked encoding"
        );
        assert_eq!(
            content_length_header.unwrap().1,
            "13",
            "Content-Length should match the body size"
        );

        Ok(())
    }
}
@@ -1 +0,0 @@
This is a test file!
@@ -1,12 +0,0 @@
CREATE TABLE body_chunks
(
    id          TEXT PRIMARY KEY,
    body_id     TEXT    NOT NULL,
    chunk_index INTEGER NOT NULL,
    data        BLOB    NOT NULL,
    created_at  DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')) NOT NULL,

    UNIQUE (body_id, chunk_index)
);

CREATE INDEX idx_body_chunks_body_id ON body_chunks (body_id, chunk_index);
@@ -1 +0,0 @@
ALTER TABLE settings ADD COLUMN client_certificates TEXT DEFAULT '[]' NOT NULL;
@@ -1,15 +0,0 @@
-- Add default User-Agent header to workspaces that don't already have one (case-insensitive check)
UPDATE workspaces
SET headers = json_insert(headers, '$[#]', json('{"enabled":true,"name":"User-Agent","value":"yaak"}'))
WHERE NOT EXISTS (
    SELECT 1 FROM json_each(workspaces.headers)
    WHERE LOWER(json_extract(value, '$.name')) = 'user-agent'
);

-- Add default Accept header to workspaces that don't already have one (case-insensitive check)
UPDATE workspaces
SET headers = json_insert(headers, '$[#]', json('{"enabled":true,"name":"Accept","value":"*/*"}'))
WHERE NOT EXISTS (
    SELECT 1 FROM json_each(workspaces.headers)
    WHERE LOWER(json_extract(value, '$.name')) = 'accept'
);
@@ -1,3 +0,0 @@
-- Add request_headers and content_length_compressed columns to http_responses table
ALTER TABLE http_responses ADD COLUMN request_headers TEXT NOT NULL DEFAULT '[]';
ALTER TABLE http_responses ADD COLUMN content_length_compressed INTEGER;
@@ -1,15 +0,0 @@
CREATE TABLE http_response_events
(
    id           TEXT NOT NULL
        PRIMARY KEY,
    model        TEXT     DEFAULT 'http_response_event' NOT NULL,
    workspace_id TEXT NOT NULL
        REFERENCES workspaces
            ON DELETE CASCADE,
    response_id  TEXT NOT NULL
        REFERENCES http_responses
            ON DELETE CASCADE,
    created_at   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')) NOT NULL,
    updated_at   DATETIME DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')) NOT NULL,
    event        TEXT NOT NULL
);
@@ -1,2 +0,0 @@
ALTER TABLE http_responses
    ADD COLUMN request_content_length INTEGER;
@@ -1 +0,0 @@
ALTER TABLE settings ADD COLUMN hotkeys TEXT DEFAULT '{}' NOT NULL;
@@ -1,2 +0,0 @@
-- Add DNS resolution timing to http_responses
ALTER TABLE http_responses ADD COLUMN elapsed_dns INTEGER DEFAULT 0 NOT NULL;
@@ -1,2 +0,0 @@
-- Add DNS overrides setting to workspaces
ALTER TABLE workspaces ADD COLUMN setting_dns_overrides TEXT DEFAULT '[]' NOT NULL;
@@ -1,12 +0,0 @@
-- Filter out headers that match the hardcoded defaults (User-Agent: yaak, Accept: */*),
-- keeping any other custom headers the user may have added.
UPDATE workspaces
SET headers = (
    SELECT json_group_array(json(value))
    FROM json_each(headers)
    WHERE NOT (
        (LOWER(json_extract(value, '$.name')) = 'user-agent' AND json_extract(value, '$.value') = 'yaak')
        OR (LOWER(json_extract(value, '$.name')) = 'accept' AND json_extract(value, '$.value') = '*/*')
    )
)
WHERE json_array_length(headers) > 0;
@@ -1,354 +0,0 @@
use crate::error::Result;
use crate::util::generate_prefixed_id;
use include_dir::{Dir, include_dir};
use log::{debug, info};
use r2d2::Pool;
use r2d2_sqlite::SqliteConnectionManager;
use rusqlite::{OptionalExtension, params};
use std::sync::{Arc, Mutex};

static BLOB_MIGRATIONS_DIR: Dir = include_dir!("$CARGO_MANIFEST_DIR/blob_migrations");

/// A chunk of body data stored in the blob database.
#[derive(Debug, Clone)]
pub struct BodyChunk {
    pub id: String,
    pub body_id: String,
    pub chunk_index: i32,
    pub data: Vec<u8>,
}

impl BodyChunk {
    pub fn new(body_id: impl Into<String>, chunk_index: i32, data: Vec<u8>) -> Self {
        Self { id: generate_prefixed_id("bc"), body_id: body_id.into(), chunk_index, data }
    }
}

/// Manages the blob database connection pool.
#[derive(Debug, Clone)]
pub struct BlobManager {
    pool: Arc<Mutex<Pool<SqliteConnectionManager>>>,
}

impl BlobManager {
    pub fn new(pool: Pool<SqliteConnectionManager>) -> Self {
        Self { pool: Arc::new(Mutex::new(pool)) }
    }

    pub fn connect(&self) -> BlobContext {
        let conn = self
            .pool
            .lock()
            .expect("Failed to gain lock on blob DB")
            .get()
            .expect("Failed to get blob DB connection from pool");
        BlobContext { conn }
    }
}

/// Context for blob database operations.
pub struct BlobContext {
    conn: r2d2::PooledConnection<SqliteConnectionManager>,
}

impl BlobContext {
    /// Insert a single chunk.
    pub fn insert_chunk(&self, chunk: &BodyChunk) -> Result<()> {
        self.conn.execute(
            "INSERT INTO body_chunks (id, body_id, chunk_index, data) VALUES (?1, ?2, ?3, ?4)",
            params![chunk.id, chunk.body_id, chunk.chunk_index, chunk.data],
        )?;
        Ok(())
    }

    /// Get all chunks for a body, ordered by chunk_index.
    pub fn get_chunks(&self, body_id: &str) -> Result<Vec<BodyChunk>> {
        let mut stmt = self.conn.prepare(
            "SELECT id, body_id, chunk_index, data FROM body_chunks
             WHERE body_id = ?1 ORDER BY chunk_index ASC",
        )?;

        let chunks = stmt
            .query_map(params![body_id], |row| {
                Ok(BodyChunk {
                    id: row.get(0)?,
                    body_id: row.get(1)?,
                    chunk_index: row.get(2)?,
                    data: row.get(3)?,
                })
            })?
            .collect::<std::result::Result<Vec<_>, _>>()?;

        Ok(chunks)
    }

    /// Delete all chunks for a body.
    pub fn delete_chunks(&self, body_id: &str) -> Result<()> {
        self.conn.execute("DELETE FROM body_chunks WHERE body_id = ?1", params![body_id])?;
        Ok(())
    }

    /// Delete all chunks matching a body_id prefix (e.g., "rs_abc123.%" to delete all bodies for a response).
    pub fn delete_chunks_like(&self, body_id_prefix: &str) -> Result<()> {
        self.conn
            .execute("DELETE FROM body_chunks WHERE body_id LIKE ?1", params![body_id_prefix])?;
        Ok(())
    }
}

impl BlobContext {
    /// Get total size of a body without loading data.
    pub fn get_body_size(&self, body_id: &str) -> Result<usize> {
        let size: i64 = self
            .conn
            .query_row(
                "SELECT COALESCE(SUM(LENGTH(data)), 0) FROM body_chunks WHERE body_id = ?1",
                params![body_id],
                |row| row.get(0),
            )
            .unwrap_or(0);
        Ok(size as usize)
    }

    /// Check if a body exists.
    pub fn body_exists(&self, body_id: &str) -> Result<bool> {
        let count: i64 = self
            .conn
            .query_row(
                "SELECT COUNT(*) FROM body_chunks WHERE body_id = ?1",
                params![body_id],
                |row| row.get(0),
            )
            .unwrap_or(0);
        Ok(count > 0)
    }
}

/// Run migrations for the blob database.
pub fn migrate_blob_db(pool: &Pool<SqliteConnectionManager>) -> Result<()> {
    info!("Running blob database migrations");

    // Create migrations tracking table
    pool.get()?.execute(
        "CREATE TABLE IF NOT EXISTS _blob_migrations (
            version TEXT PRIMARY KEY,
            description TEXT NOT NULL,
            applied_at DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL
        )",
        [],
    )?;

    // Read and sort all .sql files
    let mut entries: Vec<_> = BLOB_MIGRATIONS_DIR
        .entries()
        .iter()
        .filter(|e| e.path().extension().map(|ext| ext == "sql").unwrap_or(false))
        .collect();

    entries.sort_by_key(|e| e.path());

    let mut ran_migrations = 0;
    for entry in &entries {
        let filename = entry.path().file_name().unwrap().to_str().unwrap();
        let version = filename.split('_').next().unwrap();

        // Check if already applied
        let already_applied: Option<i64> = pool
            .get()?
            .query_row("SELECT 1 FROM _blob_migrations WHERE version = ?", [version], |r| r.get(0))
            .optional()?;

        if already_applied.is_some() {
            debug!("Skipping already applied blob migration: {}", filename);
            continue;
        }

        let sql =
            entry.as_file().unwrap().contents_utf8().expect("Failed to read blob migration file");

        info!("Applying blob migration: {}", filename);
        let conn = pool.get()?;
        conn.execute_batch(sql)?;

        // Record migration
        conn.execute(
            "INSERT INTO _blob_migrations (version, description) VALUES (?, ?)",
            params![version, filename],
        )?;

        ran_migrations += 1;
    }

    if ran_migrations == 0 {
        info!("No blob migrations to run");
    } else {
        info!("Ran {} blob migration(s)", ran_migrations);
    }

    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;

    fn create_test_pool() -> Pool<SqliteConnectionManager> {
        let manager = SqliteConnectionManager::memory();
        let pool = Pool::builder().max_size(1).build(manager).unwrap();
        migrate_blob_db(&pool).unwrap();
        pool
    }

    #[test]
    fn test_insert_and_get_chunks() {
        let pool = create_test_pool();
        let manager = BlobManager::new(pool);
        let ctx = manager.connect();

        let body_id = "rs_test123.request";
        let chunk1 = BodyChunk::new(body_id, 0, b"Hello, ".to_vec());
        let chunk2 = BodyChunk::new(body_id, 1, b"World!".to_vec());

        ctx.insert_chunk(&chunk1).unwrap();
        ctx.insert_chunk(&chunk2).unwrap();

        let chunks = ctx.get_chunks(body_id).unwrap();
        assert_eq!(chunks.len(), 2);
        assert_eq!(chunks[0].chunk_index, 0);
        assert_eq!(chunks[0].data, b"Hello, ");
        assert_eq!(chunks[1].chunk_index, 1);
        assert_eq!(chunks[1].data, b"World!");
    }

    #[test]
    fn test_get_chunks_ordered_by_index() {
        let pool = create_test_pool();
        let manager = BlobManager::new(pool);
        let ctx = manager.connect();

        let body_id = "rs_test123.request";

        // Insert out of order
        ctx.insert_chunk(&BodyChunk::new(body_id, 2, b"C".to_vec())).unwrap();
        ctx.insert_chunk(&BodyChunk::new(body_id, 0, b"A".to_vec())).unwrap();
        ctx.insert_chunk(&BodyChunk::new(body_id, 1, b"B".to_vec())).unwrap();

        let chunks = ctx.get_chunks(body_id).unwrap();
        assert_eq!(chunks.len(), 3);
        assert_eq!(chunks[0].data, b"A");
        assert_eq!(chunks[1].data, b"B");
        assert_eq!(chunks[2].data, b"C");
    }

    #[test]
    fn test_delete_chunks() {
        let pool = create_test_pool();
        let manager = BlobManager::new(pool);
        let ctx = manager.connect();

        let body_id = "rs_test123.request";
        ctx.insert_chunk(&BodyChunk::new(body_id, 0, b"data".to_vec())).unwrap();

        assert!(ctx.body_exists(body_id).unwrap());

        ctx.delete_chunks(body_id).unwrap();

        assert!(!ctx.body_exists(body_id).unwrap());
        assert_eq!(ctx.get_chunks(body_id).unwrap().len(), 0);
    }

    #[test]
    fn test_delete_chunks_like() {
        let pool = create_test_pool();
        let manager = BlobManager::new(pool);
        let ctx = manager.connect();

        // Insert chunks for same response but different body types
        ctx.insert_chunk(&BodyChunk::new("rs_abc.request", 0, b"req".to_vec())).unwrap();
        ctx.insert_chunk(&BodyChunk::new("rs_abc.response", 0, b"resp".to_vec())).unwrap();
        ctx.insert_chunk(&BodyChunk::new("rs_other.request", 0, b"other".to_vec())).unwrap();

        // Delete all bodies for rs_abc
        ctx.delete_chunks_like("rs_abc.%").unwrap();

        // rs_abc bodies should be gone
        assert!(!ctx.body_exists("rs_abc.request").unwrap());
        assert!(!ctx.body_exists("rs_abc.response").unwrap());

        // rs_other should still exist
        assert!(ctx.body_exists("rs_other.request").unwrap());
    }

    #[test]
    fn test_get_body_size() {
        let pool = create_test_pool();
        let manager = BlobManager::new(pool);
        let ctx = manager.connect();

        let body_id = "rs_test123.request";
        ctx.insert_chunk(&BodyChunk::new(body_id, 0, b"Hello".to_vec())).unwrap();
        ctx.insert_chunk(&BodyChunk::new(body_id, 1, b"World".to_vec())).unwrap();

        let size = ctx.get_body_size(body_id).unwrap();
        assert_eq!(size, 10); // "Hello" + "World" = 10 bytes
    }

    #[test]
    fn test_get_body_size_empty() {
        let pool = create_test_pool();
        let manager = BlobManager::new(pool);
        let ctx = manager.connect();

        let size = ctx.get_body_size("nonexistent").unwrap();
        assert_eq!(size, 0);
    }

    #[test]
    fn test_body_exists() {
        let pool = create_test_pool();
        let manager = BlobManager::new(pool);
        let ctx = manager.connect();

        assert!(!ctx.body_exists("rs_test.request").unwrap());

        ctx.insert_chunk(&BodyChunk::new("rs_test.request", 0, b"data".to_vec())).unwrap();

        assert!(ctx.body_exists("rs_test.request").unwrap());
    }

    #[test]
    fn test_multiple_bodies_isolated() {
        let pool = create_test_pool();
        let manager = BlobManager::new(pool);
        let ctx = manager.connect();

        ctx.insert_chunk(&BodyChunk::new("body1", 0, b"data1".to_vec())).unwrap();
        ctx.insert_chunk(&BodyChunk::new("body2", 0, b"data2".to_vec())).unwrap();

        let chunks1 = ctx.get_chunks("body1").unwrap();
        let chunks2 = ctx.get_chunks("body2").unwrap();

        assert_eq!(chunks1.len(), 1);
        assert_eq!(chunks1[0].data, b"data1");
        assert_eq!(chunks2.len(), 1);
        assert_eq!(chunks2[0].data, b"data2");
    }

    #[test]
    fn test_large_chunk() {
        let pool = create_test_pool();
        let manager = BlobManager::new(pool);
        let ctx = manager.connect();

        // 1MB chunk
        let large_data: Vec<u8> = (0..1024 * 1024).map(|i| (i % 256) as u8).collect();
        let body_id = "rs_large.request";

        ctx.insert_chunk(&BodyChunk::new(body_id, 0, large_data.clone())).unwrap();

        let chunks = ctx.get_chunks(body_id).unwrap();
        assert_eq!(chunks.len(), 1);
        assert_eq!(chunks[0].data, large_data);
        assert_eq!(ctx.get_body_size(body_id).unwrap(), 1024 * 1024);
    }
}
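Since `get_chunks` returns rows ordered by `chunk_index`, reassembling a full body is a straight concatenation. A small sketch using only the `BlobContext` API above:

```rust
// Sketch: rebuild a body from its stored chunks. Pre-sizing the buffer via
// get_body_size avoids reallocations; get_chunks already returns chunks in
// index order, so extend_from_slice in loop order reconstructs the stream.
fn read_full_body(ctx: &BlobContext, body_id: &str) -> Result<Vec<u8>> {
    let mut out = Vec::with_capacity(ctx.get_body_size(body_id)?);
    for chunk in ctx.get_chunks(body_id)? {
        out.extend_from_slice(&chunk.data);
    }
    Ok(out)
}
```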
@@ -1,100 +0,0 @@
use crate::blob_manager::{BlobManager, migrate_blob_db};
use crate::error::{Error, Result};
use crate::migrate::migrate_db;
use crate::query_manager::QueryManager;
use crate::util::ModelPayload;
use log::info;
use r2d2::Pool;
use r2d2_sqlite::SqliteConnectionManager;
use std::fs::create_dir_all;
use std::path::Path;
use std::sync::mpsc;
use std::time::Duration;

pub mod blob_manager;
mod connection_or_tx;
pub mod db_context;
pub mod error;
pub mod migrate;
pub mod models;
pub mod queries;
pub mod query_manager;
pub mod render;
pub mod util;

/// Initialize the database managers for standalone (non-Tauri) usage.
///
/// Returns a tuple of (QueryManager, BlobManager, event_receiver).
/// The event_receiver can be used to listen for model change events.
pub fn init_standalone(
    db_path: impl AsRef<Path>,
    blob_path: impl AsRef<Path>,
) -> Result<(QueryManager, BlobManager, mpsc::Receiver<ModelPayload>)> {
    let db_path = db_path.as_ref();
    let blob_path = blob_path.as_ref();

    // Create parent directories if needed
    if let Some(parent) = db_path.parent() {
        create_dir_all(parent)?;
    }
    if let Some(parent) = blob_path.parent() {
        create_dir_all(parent)?;
    }

    // Main database pool
    info!("Initializing app database {db_path:?}");
    let manager = SqliteConnectionManager::file(db_path);
    let pool = Pool::builder()
        .max_size(100)
        .connection_timeout(Duration::from_secs(10))
        .build(manager)
        .map_err(|e| Error::Database(e.to_string()))?;

    migrate_db(&pool)?;

    info!("Initializing blobs database {blob_path:?}");

    // Blob database pool
    let blob_manager = SqliteConnectionManager::file(blob_path);
    let blob_pool = Pool::builder()
        .max_size(50)
        .connection_timeout(Duration::from_secs(10))
        .build(blob_manager)
        .map_err(|e| Error::Database(e.to_string()))?;

    migrate_blob_db(&blob_pool)?;

    let (tx, rx) = mpsc::channel();
    let query_manager = QueryManager::new(pool, tx);
    let blob_manager = BlobManager::new(blob_pool);

    Ok((query_manager, blob_manager, rx))
}

/// Initialize the database managers with in-memory SQLite databases.
/// Useful for testing and CI environments.
pub fn init_in_memory() -> Result<(QueryManager, BlobManager, mpsc::Receiver<ModelPayload>)> {
    // Main database pool
    let manager = SqliteConnectionManager::memory();
    let pool = Pool::builder()
        .max_size(1) // In-memory DB doesn't support multiple connections
        .build(manager)
        .map_err(|e| Error::Database(e.to_string()))?;

    migrate_db(&pool)?;

    // Blob database pool
    let blob_manager = SqliteConnectionManager::memory();
    let blob_pool = Pool::builder()
        .max_size(1)
        .build(blob_manager)
        .map_err(|e| Error::Database(e.to_string()))?;

    migrate_blob_db(&blob_pool)?;

    let (tx, rx) = mpsc::channel();
    let query_manager = QueryManager::new(pool, tx);
    let blob_manager = BlobManager::new(blob_pool);

    Ok((query_manager, blob_manager, rx))
}
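A hedged sketch of how a standalone (non-Tauri) consumer might use this API, assuming only the public items shown above; the event-handling body is illustrative:

```rust
// Sketch: initialize in-memory databases (handy for tests/CI) and drain
// model-change events on a background thread so the CLI stays unblocked.
fn main() -> yaak_models::error::Result<()> {
    let (queries, blobs, events) = yaak_models::init_in_memory()?;

    std::thread::spawn(move || {
        for _payload in events {
            // react to model changes here (refresh caches, log, etc.)
        }
    });

    let _ = (queries, blobs); // hand these to the CLI command handlers
    Ok(())
}
```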
@@ -1,18 +0,0 @@
use crate::db_context::DbContext;
use crate::error::Result;
use crate::models::{HttpResponseEvent, HttpResponseEventIden};
use crate::util::UpdateSource;

impl<'a> DbContext<'a> {
    pub fn list_http_response_events(&self, response_id: &str) -> Result<Vec<HttpResponseEvent>> {
        self.find_many(HttpResponseEventIden::ResponseId, response_id, None)
    }

    pub fn upsert_http_response_event(
        &self,
        http_response_event: &HttpResponseEvent,
        source: &UpdateSource,
    ) -> Result<HttpResponseEvent> {
        self.upsert(http_response_event, source)
    }
}
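These are thin wrappers over DbContext's generic `find_many`/`upsert` helpers. A hedged usage sketch of the listing side; the `response_id` and `event` field names are assumed from the `http_response_events` table definition earlier in this diff:

```rust
// Hypothetical example (not from the diff): replay stored events for one
// response. Field names follow the SQL schema above.
fn replay_events(db: &DbContext, response_id: &str) -> Result<()> {
    for ev in db.list_http_response_events(response_id)? {
        println!("{}: {}", ev.response_id, ev.event);
    }
    Ok(())
}
```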
Some files were not shown because too many files have changed in this diff