diff --git a/content/docs/meta.json b/content/docs/meta.json
index 11eceb3..08ef7a8 100644
--- a/content/docs/meta.json
+++ b/content/docs/meta.json
@@ -47,6 +47,15 @@
"user-guide/retention",
"user-guide/smart-cache",
"api",
+ "---pb CLI---",
+ "pb-cli/index",
+ "pb-cli/install",
+ "pb-cli/connect",
+ "pb-cli/query",
+ "pb-cli/promql",
+ "pb-cli/tail",
+ "pb-cli/datasets",
+ "pb-cli/admin",
"---Self Hosted---",
"self-hosted/installation",
"self-hosted/configuration",
diff --git a/content/docs/pb-cli/admin.mdx b/content/docs/pb-cli/admin.mdx
new file mode 100644
index 0000000..1b38b67
--- /dev/null
+++ b/content/docs/pb-cli/admin.mdx
@@ -0,0 +1,95 @@
+---
+title: "Users and Roles"
+description: "Manage users and role-based access control with pb user and pb role."
+---
+
+
+These commands require admin privileges on the Parseable server.
+
+
+Parseable uses role-based access control (RBAC). Roles define what a user can do on which datasets. You create roles first, then assign them to users.
+
+## Roles
+
+### Create a role
+
+```bash
+pb role add
+```
+
+This starts an interactive prompt where you choose a privilege level and the dataset it applies to.
+
+**Privilege levels:**
+
+| Privilege | Access |
+|---|---|
+| `reader` | Read-only access to a specific dataset |
+| `writer` | Read and write access to a specific dataset |
+| `ingestor` | Write-only (ingest) access to a specific dataset |
+| `admin` | Full system-wide access |
+
+```bash
+pb role add log-readers
+# → select privilege: reader
+# → select dataset: backend_logs
+```
+
+### List roles
+
+```bash
+pb role list
+
+# JSON output
+pb role list --output json
+```
+
+### Delete a role
+
+```bash
+pb role remove
+```
+
+## Users
+
+### Create a user
+
+```bash
+pb user add <username>
+```
+
+The server generates a password and prints it. Save it — it cannot be retrieved later.
+
+```bash
+# Create a user and assign roles immediately
+pb user add bob --role log-readers,developers
+```
+
+### List users
+
+```bash
+pb user list
+
+# JSON output
+pb user list --output json
+```
+
+### Update user roles
+
+```bash
+pb user set-role <username> <roles>
+```
+
+```bash
+# Assign (or replace) roles
+pb user set-role bob log-readers,admins
+```
+
+### Delete a user
+
+```bash
+pb user remove
+```
+
+
+Roles must exist before they can be assigned to a user. Create roles with `pb role add` first.
+
diff --git a/content/docs/pb-cli/connect.mdx b/content/docs/pb-cli/connect.mdx
new file mode 100644
index 0000000..a416a86
--- /dev/null
+++ b/content/docs/pb-cli/connect.mdx
@@ -0,0 +1,91 @@
+---
+title: "Connecting to a Server"
+description: "Create a profile and authenticate with your Parseable instance."
+---
+
+`pb` stores server connections as named **profiles**. A profile holds the server URL and credentials for one Parseable instance. You can have multiple profiles and switch between them, which is useful when you work with separate staging and production deployments.
+
+## Interactive login (recommended)
+
+Run `pb login` to start the interactive wizard. It walks you through selecting a server type, entering your URL, choosing an authentication method, and saving the profile.
+
+```bash
+pb login
+```
+
+The wizard prompts you for:
+- Server type (self-hosted or cloud)
+- Server URL (e.g. `https://logs.mycompany.com`)
+- Auth method — username/password or API key
+- Credentials
+- Profile name (defaults to `default`)
+
+Once complete, the profile is saved and set as the active default.
+
+## Non-interactive login
+
+For scripts and CI pipelines, use `pb profile add` to create a profile without prompts:
+
+```bash
+pb profile add <name> <url> <username> <password>
+```
+
+```bash
+# Example
+pb profile add production https://logs.mycompany.com admin s3cr3t
+
+# With an API key instead of username/password
+pb profile add ci https://logs.mycompany.com --token myapikey
+```
+
+## Managing profiles
+
+```bash
+# List all saved profiles
+pb profile list
+
+# Switch the active default profile
+pb profile default staging
+
+# Update a profile's server URL
+pb profile update production https://new-url.mycompany.com
+
+# Remove a profile
+pb profile remove old-staging
+```
+
+## Check connection status
+
+```bash
+pb status
+```
+
+This validates that the active profile can reach the server and shows the server version:
+
+```
+Profile : production
+URL : https://logs.mycompany.com
+User : admin
+Status : ✓ Connected (server v1.4.2)
+```
+
+## Logout
+
+```bash
+pb logout
+```
+
+Removes credentials from the active profile. To log out of a specific profile, switch to it first with `pb profile default <name>`, then run `pb logout`.
+
+## Config file location
+
+Profiles are stored in a TOML config file on disk:
+
+| Platform | Path |
+|---|---|
+| macOS / Linux | `~/.config/pb/config.toml` |
+| Windows | `%APPDATA%\pb\config.toml` |
+
+
+All commands use the active default profile automatically. Use `pb profile default <name>` to switch between servers without specifying a profile on every command.
+
diff --git a/content/docs/pb-cli/datasets.mdx b/content/docs/pb-cli/datasets.mdx
new file mode 100644
index 0000000..5d993fa
--- /dev/null
+++ b/content/docs/pb-cli/datasets.mdx
@@ -0,0 +1,68 @@
+---
+title: "Dataset Management"
+description: "Create, inspect, and delete datasets with pb dataset."
+---
+
+Datasets are the top-level containers for log and metrics data in Parseable. Use the `pb dataset` subcommands to manage them from the terminal.
+
+## List datasets
+
+```bash
+pb dataset list
+```
+
+Shows all datasets available on the active server.
+
+```bash
+# JSON output for scripting
+pb dataset list --output json
+```
+
+## Create a dataset
+
+```bash
+pb dataset add <name>
+```
+
+```bash
+pb dataset add backend_logs
+pb dataset add otel_metrics
+```
+
+Dataset names must be lowercase and contain no spaces.
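+
+This rule is easy to check in a shell script before calling `pb dataset add`. A minimal sketch (the server may enforce additional constraints beyond lowercase and no spaces):
+
+```bash
+name="backend_logs"
+
+# Reject names containing uppercase letters or spaces
+case "$name" in
+  *[A-Z\ ]*) echo "invalid: must be lowercase with no spaces" ;;
+  *)         echo "ok: $name" ;;
+esac
+```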
+
+## Inspect a dataset
+
+```bash
+pb dataset info <name>
+```
+
+```bash
+pb dataset info backend_logs
+```
+
+Shows:
+- Event count and ingestion size
+- Storage size and compression ratio
+- Retention period (if configured)
+- Active alerts (if any)
+- Dataset type (logs or metrics)
+
+```bash
+# JSON output
+pb dataset info backend_logs --output json
+```
+
+## Delete a dataset
+
+```bash
+pb dataset remove <name>
+```
+
+```bash
+pb dataset remove old_logs
+```
+
+
+Deleting a dataset removes all data it contains. This action cannot be undone.
+
diff --git a/content/docs/pb-cli/index.mdx b/content/docs/pb-cli/index.mdx
new file mode 100644
index 0000000..aa135a4
--- /dev/null
+++ b/content/docs/pb-cli/index.mdx
@@ -0,0 +1,60 @@
+---
+title: pb CLI
+description: A command-line interface for Parseable that lets you query logs, analyze metrics, and stream live events directly from your terminal.
+---
+
+`pb` is the official CLI for Parseable. It provides a terminal-based interface for querying log datasets with SQL, running PromQL queries on metrics, streaming live events, and managing datasets, users, and roles without opening a browser.
+
+`pb` is built for developers who work in the terminal and want full access to Parseable from shell scripts, CI pipelines, or day-to-day workflows. It supports both a standard command-line mode for scripting and automation, and a full-screen interactive TUI for exploratory work.
+
+> Currently, `pb` works with self-hosted Parseable deployments only. Cloud support is coming soon.
+
+## Commands
+
+| Command | Description |
+|---|---|
+| `pb login` | Start an interactive wizard to connect to a Parseable server and save the connection as a profile |
+| `pb profile` | List, switch, update, and remove saved server connections |
+| `pb status` | Verify the active connection and display server version information |
+| `pb query run` | Execute SQL queries against a log dataset, with support for time ranges, JSON output, and interactive TUI mode |
+| `pb query promql` | Run PromQL range and instant queries on a metrics dataset, with label exploration and cardinality analysis |
+| `pb tail` | Stream live log events from a dataset in real time as newline-delimited JSON |
+| `pb dataset` | List, create, inspect, and delete datasets on the connected server |
+| `pb user` | Create users, assign roles, and manage user access |
+| `pb role` | Define roles with specific privilege levels and dataset scopes |
+
+## Interactive mode
+
+Both SQL log queries and PromQL metrics queries support an interactive full-screen TUI using the `-i` flag. In this mode, `pb` opens a panel-based interface where you can write and edit your query, adjust the time range, and browse through paginated results, all without leaving the terminal.
+
+```bash
+# Open the interactive SQL query interface
+pb query run -i
+
+# Open the interactive PromQL interface
+pb query run -i --promql
+# or
+pb query promql run -i
+```
+
+Navigate between panels using `Tab` and `Shift+Tab`. Press `Ctrl+R` to run the query. Results are displayed in a scrollable table with column navigation and inline row filtering.
+
+## Output formats
+
+Every `pb` command that returns data supports `--output json` for machine-readable output. The default output is a human-readable table.
+
+```bash
+# Human-readable table (default)
+pb dataset list
+
+# JSON output for scripting or piping to jq
+pb dataset list --output json
+pb query run "SELECT level, count(*) FROM logs GROUP BY level" --from 1h --output json
+```
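+
+The JSON output composes with standard Unix tools. The snippet below runs on a captured sample rather than a live server, and assumes the output is an array of objects with a `name` field (the real schema may differ):
+
+```bash
+# Stand-in for: pb dataset list --output json
+datasets='[{"name":"backend_logs"},{"name":"otel_metrics"}]'
+
+# Extract just the dataset names
+printf '%s\n' "$datasets" | grep -o '"name":"[^"]*"' | cut -d'"' -f4
+# → backend_logs
+# → otel_metrics
+```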
+
+## Getting started
+
+1. [Install pb](/docs/pb-cli/install) — download the binary for macOS, Linux, or Windows
+2. [Connect to a server](/docs/pb-cli/connect) — authenticate and save a server profile
+3. [Run a SQL query](/docs/pb-cli/query) — query a log dataset with a time range
+4. [Run a PromQL query](/docs/pb-cli/promql) — query a metrics dataset
diff --git a/content/docs/pb-cli/install.mdx b/content/docs/pb-cli/install.mdx
new file mode 100644
index 0000000..fb91c2a
--- /dev/null
+++ b/content/docs/pb-cli/install.mdx
@@ -0,0 +1,92 @@
+---
+title: "Installation"
+description: "Download and install pb on macOS, Linux, or Windows."
+---
+
+## macOS
+
+First, check which chip your Mac has. Click the Apple menu → **About This Mac**.
+
+- If it says **Apple M1 / M2 / M3** → use the `arm64` binary
+- If it says **Intel** → use the `amd64` binary
+
+```bash
+# Apple Silicon (M1, M2, M3)
+curl -LO https://github.com/parseablehq/pb/releases/latest/download/pb_darwin_arm64
+chmod +x pb_darwin_arm64
+sudo mv pb_darwin_arm64 /usr/local/bin/pb
+```
+
+```bash
+# Intel
+curl -LO https://github.com/parseablehq/pb/releases/latest/download/pb_darwin_amd64
+chmod +x pb_darwin_amd64
+sudo mv pb_darwin_amd64 /usr/local/bin/pb
+```
+
+## Linux
+
+Not sure which architecture your machine is? Run `uname -m` in your terminal.
+
+- `x86_64` → use `amd64`
+- `aarch64` → use `arm64`
+
+```bash
+# x86 64-bit (amd64)
+curl -LO https://github.com/parseablehq/pb/releases/latest/download/pb_linux_amd64
+chmod +x pb_linux_amd64
+sudo mv pb_linux_amd64 /usr/local/bin/pb
+```
+
+```bash
+# ARM 64-bit (arm64)
+curl -LO https://github.com/parseablehq/pb/releases/latest/download/pb_linux_arm64
+chmod +x pb_linux_arm64
+sudo mv pb_linux_arm64 /usr/local/bin/pb
+```
+
+## Windows
+
+Download `pb_windows_amd64` from the [releases page](https://github.com/parseablehq/pb/releases), rename it to `pb.exe`, and move it to a folder that is in your system `PATH`.
+
+## Using Go
+
+If you have Go 1.21 or later installed, you can install `pb` directly with:
+
+```bash
+go install github.com/parseablehq/pb@latest
+```
+
+This builds `pb` from source and places it in your `$GOPATH/bin`. Make sure `$GOPATH/bin` is in your `PATH`.
+
+## Verify installation
+
+After installing, run:
+
+```bash
+pb --version
+```
+
+You should see the version number and build commit printed to the terminal. If you get a `command not found` error, check that the folder where `pb` was moved is included in your `PATH`.
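+
+If you see `command not found`, you can confirm whether the shell can locate the binary at all. This only inspects `PATH`; it does not run `pb`:
+
+```bash
+if command -v pb >/dev/null 2>&1; then
+  echo "pb resolves to: $(command -v pb)"
+else
+  echo "pb is not on PATH; check where you moved the binary"
+fi
+```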
+
+## Shell autocomplete
+
+`pb` can generate completion scripts so your shell auto-completes commands and flags. Run this once after installation.
+
+**bash**
+```bash
+pb autocomplete bash | sudo tee /etc/bash_completion.d/pb > /dev/null
+source /etc/bash_completion.d/pb
+```
+
+**zsh**
+```bash
+pb autocomplete zsh > /usr/local/share/zsh/site-functions/_pb
+autoload -Uz compinit && compinit
+```
+
+**PowerShell**
+```powershell
+pb autocomplete powershell > $env:USERPROFILE\Documents\PowerShell\pb_complete.ps1
+. $env:USERPROFILE\Documents\PowerShell\pb_complete.ps1
+```
diff --git a/content/docs/pb-cli/meta.json b/content/docs/pb-cli/meta.json
new file mode 100644
index 0000000..42a65e8
--- /dev/null
+++ b/content/docs/pb-cli/meta.json
@@ -0,0 +1,15 @@
+{
+ "title": "pb CLI",
+ "description": "Command-line interface for Parseable",
+ "defaultOpen": true,
+ "pages": [
+ "index",
+ "install",
+ "connect",
+ "query",
+ "promql",
+ "tail",
+ "datasets",
+ "admin"
+ ]
+}
diff --git a/content/docs/pb-cli/promql.mdx b/content/docs/pb-cli/promql.mdx
new file mode 100644
index 0000000..1af9b6b
--- /dev/null
+++ b/content/docs/pb-cli/promql.mdx
@@ -0,0 +1,85 @@
+---
+title: "PromQL Queries"
+description: "Query metrics datasets in Parseable using PromQL."
+---
+
+Parseable can store metrics data ingested via OpenTelemetry or Prometheus remote write. The `pb query promql` subcommands let you query, explore, and analyze those metrics from the terminal.
+
+## Run a PromQL query
+
+```bash
+pb query promql run "<expression>" --dataset <dataset> --from <duration>
+```
+
+```bash
+# Range query: rate of HTTP requests over the last hour, 1-minute resolution
+pb query promql run "rate(http_requests_total[5m])" --dataset otel_metrics --from 1h --step 1m
+
+# Instant query: current value only
+pb query promql run "up" --dataset otel_metrics --instant
+```
+
+### Flags
+
+| Flag | Default | Description |
+|---|---|---|
+| `--dataset` / `-d` | — | Metrics dataset to query (required) |
+| `--from` / `-f` | `5m` | Start of query window |
+| `--to` / `-t` | `now` | End of query window |
+| `--step` | `1m` | Resolution step for range queries |
+| `--instant` | false | Run as instant query instead of range |
+| `--output` / `-o` | `text` | Output format: `text` or `json` |
+| `--interactive` / `-i` | false | Open interactive TUI |
+
+## Interactive TUI
+
+```bash
+pb query promql run -i
+```
+
+The TUI has panels for Dataset, Query expression, Time Range, Step size, and Results. Navigate with `Tab` / `Shift+Tab`, run with `Ctrl+R`.
+
+## Explore labels and series
+
+```bash
+# List all label names in a metrics dataset
+pb query promql labels --dataset otel_metrics
+
+# List all values for a specific label
+pb query promql label-values job --dataset otel_metrics
+
+# Find series matching a selector
+pb query promql series --match 'http_requests_total' --dataset otel_metrics
+pb query promql series --match '{job="api-server"}' --dataset otel_metrics
+```
+
+## Cardinality analysis
+
+High-cardinality labels are a common cause of performance issues. These commands help you find them.
+
+```bash
+# Labels with the most distinct values
+pb query promql cardinality label-names --dataset otel_metrics --limit 20
+
+# Series count per value for a specific label
+pb query promql cardinality label-values --label job --dataset otel_metrics
+
+# Currently active series
+pb query promql cardinality active-series --dataset otel_metrics --selector '{job="api"}'
+```
+
+## TSDB statistics
+
+```bash
+pb query promql tsdb --dataset otel_metrics --top 10
+```
+
+Shows storage statistics for the metrics dataset, including a breakdown of the top N series by size.
+
+## Active queries
+
+```bash
+pb query promql active-queries
+```
+
+Lists any PromQL queries currently running on the server. Useful for debugging slow or stuck queries.
diff --git a/content/docs/pb-cli/query.mdx b/content/docs/pb-cli/query.mdx
new file mode 100644
index 0000000..944d388
--- /dev/null
+++ b/content/docs/pb-cli/query.mdx
@@ -0,0 +1,82 @@
+---
+title: "SQL Queries"
+description: "Run SQL queries against your log datasets using pb query run."
+---
+
+`pb query run` lets you query any dataset with SQL. You can specify a time range, save queries for reuse, and open an interactive TUI for exploratory work.
+
+## Basic usage
+
+```bash
+pb query run "SELECT * FROM backend" --from 10m --to now
+```
+
+`--from` and `--to` define the time window to query. The dataset name is the SQL `FROM` target.
+
+## Time range syntax
+
+| Format | Example | Meaning |
+|---|---|---|
+| Relative | `10m`, `2h`, `7d` | Last N minutes / hours / days |
+| Absolute | `2024-01-15T00:00:00Z` | Exact RFC3339 timestamp |
+| Keyword | `now` | Current time |
+
+```bash
+# Last 30 minutes
+pb query run "SELECT * FROM logs LIMIT 100" --from 30m
+
+# Specific time window
+pb query run "SELECT count(*) FROM logs" --from 2024-01-01T00:00:00Z --to 2024-01-02T00:00:00Z
+
+# Default: last 1 minute
+pb query run "SELECT * FROM logs"
+```
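+
+For absolute windows, you can compute the RFC3339 timestamps with `date` instead of writing them by hand. A sketch using GNU `date` (on macOS/BSD, use `date -u -v-24H` in place of `-d '24 hours ago'`):
+
+```bash
+# Build an exact 24-hour window ending now
+from=$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)
+to=$(date -u +%Y-%m-%dT%H:%M:%SZ)
+echo "querying from $from to $to"
+
+# Then pass the window along:
+# pb query run "SELECT count(*) FROM logs" --from "$from" --to "$to"
+```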
+
+## JSON output
+
+Use `--output json` to get structured output suitable for scripting:
+
+```bash
+pb query run "SELECT * FROM logs LIMIT 50" --from 1h --output json
+```
+
+## Save and reuse queries
+
+Save a query for later with `--save-as`:
+
+```bash
+pb query run "SELECT level, count(*) FROM logs GROUP BY level" --from 1h --save-as error-summary
+```
+
+List and manage saved queries:
+
+```bash
+pb query list
+```
+
+This opens an interactive menu where you can apply or delete saved queries.
+
+## Interactive TUI mode
+
+Add `-i` to open the full-screen interactive query interface:
+
+```bash
+pb query run -i
+```
+
+The TUI has three panels: Query, Time Range, and Results Table. Navigate between them with `Tab` / `Shift+Tab`.
+
+| Key | Action |
+|---|---|
+| `Tab` / `Shift+Tab` | Switch between panels |
+| `Ctrl+R` | Run query |
+| `↑` / `↓` | Scroll rows |
+| `Shift+↑` / `Shift+↓` | Page up / down |
+| `←` / `→` | Scroll columns |
+| `/` | Filter table rows |
+| `Esc` | Clear filter |
+| `Ctrl+B` | Previous page of results |
+
+
+Field names containing dots (OTel convention, e.g. `service.name`) or hyphens are automatically quoted in queries. You don't need to add quotes manually.
+
diff --git a/content/docs/pb-cli/tail.mdx b/content/docs/pb-cli/tail.mdx
new file mode 100644
index 0000000..1e67c58
--- /dev/null
+++ b/content/docs/pb-cli/tail.mdx
@@ -0,0 +1,72 @@
+---
+title: "Live Tail"
+description: "Stream log events from a dataset in real time with pb tail."
+---
+
+`pb tail` connects to a dataset and streams new events as they arrive. It is designed for real-time monitoring: watching an ingestion pipeline, debugging a running service, or confirming that data is flowing.
+
+## Usage
+
+```bash
+pb tail <dataset>
+```
+
+```bash
+# Watch a dataset called "backend"
+pb tail backend
+```
+
+Events are printed as newline-delimited JSON, one event per line. Press `Ctrl+C` to stop.
+
+```
+● watching backend... (ctrl+c to stop)
+{"level":"info","msg":"request received","service":"api","ts":"2024-01-15T10:23:01Z"}
+{"level":"error","msg":"db timeout","service":"api","ts":"2024-01-15T10:23:02Z"}
+```
+
+## Piping to jq
+
+Because output is newline-delimited JSON, you can pipe directly to `jq` for filtering and formatting:
+
+```bash
+# Pretty-print all events
+pb tail backend | jq .
+
+# Show only error-level events
+pb tail backend | jq 'select(.level == "error")'
+
+# Extract specific fields
+pb tail backend | jq '{time: .ts, msg: .msg}'
+```
+
+## Piping to grep
+
+```bash
+# Show only lines containing "timeout"
+pb tail backend | grep timeout
+
+# Exclude health check noise
+pb tail backend | grep -v "health_check"
+```
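+
+The same pipelines work on recorded output too. Here a two-line sample stands in for a live `pb tail` stream (the field names mirror the example above; your schema may differ):
+
+```bash
+# Stand-in for: pb tail backend
+sample='{"level":"info","msg":"request received","service":"api"}
+{"level":"error","msg":"db timeout","service":"api"}'
+
+# Count error-level events, as in: pb tail backend | grep -c '"level":"error"'
+printf '%s\n' "$sample" | grep -c '"level":"error"'   # → 1
+```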
+
+## How it works
+
+`pb tail` uses Apache Arrow Flight over gRPC for streaming. This means it needs access to **two ports** on your Parseable server, not just the main HTTP port:
+
+| Port | Protocol | Purpose |
+|---|---|---|
+| `8000` | HTTP | Main Parseable API (login, query, datasets) |
+| `8001` | gRPC | Arrow Flight streaming (used by `pb tail`) |
+
+## Troubleshooting
+
+**Error: `rpc error: code = Unavailable ... dial tcp :8001: i/o timeout`**
+
+This means `pb tail` connected to the server successfully over HTTP, but could not reach the gRPC port on `8001`. This is the most common error when using `pb tail`.
+
+Possible causes:
+- Port `8001` is blocked by a firewall on the server
+- Port `8001` is not exposed in your Docker or Kubernetes setup
+- Your network restricts outbound gRPC connections
+
+To fix it, make sure port `8001` is open and reachable from your machine. If you are running Parseable in Docker, add `-p 8001:8001` to your run command. If you are behind a firewall, allow outbound TCP on port `8001`.