Merged
15 changes: 15 additions & 0 deletions docs-mintlify/admin/deployment/oidc/aws.mdx
Original file line number Diff line number Diff line change
@@ -309,6 +309,21 @@ deployment's default identity is the simplest place to put this.
</Step>
</Steps>

<Warning>

OIDC only covers Cube's **read** side of the export bucket. The data
warehouse itself (Snowflake, Redshift, Athena, BigQuery, …) runs the
`UNLOAD` that writes objects to the bucket, and the warehouse cannot
federate with Cube's OIDC issuer. You still need to provide **separate
credentials for the `UNLOAD`** so the warehouse can write to S3 — typically
an AWS access key pair or a warehouse-side storage integration / IAM role
— via the standard export bucket env vars (e.g.
`CUBEJS_DB_EXPORT_BUCKET_AWS_KEY` and `CUBEJS_DB_EXPORT_BUCKET_AWS_SECRET`,
or the driver-specific storage-integration variables). OIDC then handles
Cube's download of the unloaded objects from the bucket.

</Warning>
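As an illustrative sketch of that split (bucket name and key values below are placeholders, and your driver may use a storage integration instead of an access key pair), the deployment's environment might look like:

```sh
# Write side: credentials the warehouse uses for its UNLOAD into S3.
# Placeholder values, not real keys.
CUBEJS_DB_EXPORT_BUCKET=my-export-bucket
CUBEJS_DB_EXPORT_BUCKET_AWS_KEY=AKIAXXXXXXXXXXXXXXXX
CUBEJS_DB_EXPORT_BUCKET_AWS_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Read side: no bucket credentials needed here; OIDC covers Cube's download.
```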

## Cube Store CSPS bucket

Cube Store CSPS lets you store pre-aggregations in your own S3 bucket.
15 changes: 15 additions & 0 deletions docs-mintlify/admin/deployment/oidc/azure.mdx
@@ -231,6 +231,21 @@ Contributor** on the storage account.
</Step>
</Steps>

<Warning>

OIDC only covers Cube's **read** side of the export bucket. The data
warehouse itself (Snowflake on Azure, Synapse, …) runs the `UNLOAD` that
writes objects to Blob Storage, and the warehouse cannot federate with
Cube's OIDC issuer. You still need to provide **separate credentials for
the unload** so the warehouse can write to the container — typically a
storage account key, SAS token, or a warehouse-side storage integration —
via the standard export bucket env vars (e.g.
`CUBEJS_DB_EXPORT_BUCKET_AZURE_KEY`, or the driver-specific
storage-integration variables). OIDC then handles Cube's download of the
unloaded objects from the bucket.

</Warning>
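As an illustrative sketch (the container URL and key below are placeholders; a SAS token or warehouse-side storage integration also works), the deployment's environment might look like:

```sh
# Write side: credential the warehouse uses to write into the container.
# Placeholder values, not real secrets.
CUBEJS_DB_EXPORT_BUCKET=wasbs://exports@myaccount.blob.core.windows.net
CUBEJS_DB_EXPORT_BUCKET_AZURE_KEY=<storage-account-key>

# Read side: no credentials needed here; OIDC covers Cube's download.
```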

## Scaling past 20 federated credentials

A single app registration accepts at most **20 federated credentials**.
15 changes: 15 additions & 0 deletions docs-mintlify/admin/deployment/oidc/gcp.mdx
@@ -261,6 +261,21 @@ deployment's service account read / write access to the bucket.
</Step>
</Steps>

<Warning>

OIDC only covers Cube's **read** side of the export bucket. The data
warehouse itself (BigQuery, Snowflake on GCP, …) runs the `UNLOAD` /
`EXPORT DATA` that writes objects to the bucket, and the warehouse cannot
federate with Cube's OIDC issuer. You still need to provide **separate
credentials for the unload** so the warehouse can write to GCS — typically
an HMAC key pair or a warehouse-side service-account integration — via the
standard export bucket env vars (e.g.
`CUBEJS_DB_EXPORT_GCS_CREDENTIALS`, or the driver-specific
storage-integration variables). OIDC then handles Cube's download of the
unloaded objects from the bucket.

</Warning>
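As an illustrative sketch (the bucket name and credential value below are placeholders; BigQuery may be able to write with its own service account instead), the deployment's environment might look like:

```sh
# Write side: credentials the warehouse's unload uses to write into GCS.
# Placeholder value, not a real key.
CUBEJS_DB_EXPORT_BUCKET=my-export-bucket
CUBEJS_DB_EXPORT_GCS_CREDENTIALS=<base64-encoded-service-account-json>

# Read side: no credentials needed here; OIDC covers Cube's download.
```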

## Direct federation

If you'd rather skip the service account impersonation hop, grant
4 changes: 3 additions & 1 deletion docs-mintlify/admin/deployment/oidc/index.mdx
@@ -24,7 +24,9 @@ You can use OIDC workload identity to authenticate to:
- **Data sources** — AWS Athena, Redshift, BigQuery, Snowflake, and any other
driver that supports federated credentials.
- **Export buckets** — S3 and GCS buckets used for `EXPORT_BUCKET` pre-aggregation
unloads.
unloads. OIDC covers Cube's download of the unloaded objects; the warehouse's
`UNLOAD` write still needs its own credentials configured on the client side
— see the per-cloud guides for details.
- **Cube Store CSPS** — a per-deployment S3 / GCS bucket that holds your
Cube Store pre-aggregations (Customer-Supplied Pre-aggregation Storage).
- **Bring-your-own LLM providers** — AWS Bedrock, Google Vertex AI, and Azure
13 changes: 8 additions & 5 deletions packages/cubejs-backend-native/src/orchestrator.rs
@@ -11,8 +11,8 @@ use neon::context::{Context, FunctionContext, ModuleContext};
use neon::handle::Handle;
use neon::object::Object;
use neon::prelude::{
JsArray, JsArrayBuffer, JsBox, JsBuffer, JsFunction, JsObject, JsPromise, JsResult, JsValue,
NeonResult,
JsArray, JsArrayBuffer, JsBox, JsBuffer, JsFunction, JsObject, JsPromise, JsResult, JsString,
JsValue, NeonResult,
};
use neon::types::buffer::TypedArray;
use serde::Deserialize;
@@ -330,21 +330,24 @@ pub fn get_cubestore_result(mut cx: FunctionContext) -> JsResult<JsValue> {
let result = cx.argument::<JsBox<Arc<QueryResult>>>(0)?;

let js_array = cx.execute_scoped(|mut cx| {
let js_keys: Vec<Handle<JsString>> = result.members.iter().map(|k| cx.string(k)).collect();

let js_array = JsArray::new(&mut cx, result.rows.len());

for (i, row) in result.rows.iter().enumerate() {
let js_row = cx.execute_scoped(|mut cx| {
let js_row = JsObject::new(&mut cx);
for (key, value) in result.members.iter().zip(row.iter()) {
let js_key = cx.string(key);

for (js_key, value) in js_keys.iter().zip(row.iter()) {
let js_value: Handle<'_, JsValue> = match value {
DBResponsePrimitive::Null => cx.null().upcast(),
// For compatibility, we convert all primitives to strings
other => cx.string(other.to_string()).upcast(),
};

js_row.set(&mut cx, js_key, js_value)?;
js_row.set(&mut cx, *js_key, js_value)?;
}

Ok(js_row)
})?;

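The hunk above hoists the `cx.string(key)` conversion out of the per-row loop, so each column name is turned into a JS string once instead of once per row. A standalone Rust sketch of the same hoisting pattern, using plain `String`s rather than neon's `JsString` handles:

```rust
/// Build one (key, value) map per row, converting the column names
/// only once, before iterating rows (the PR's `js_keys` hoist).
fn rows_to_maps(members: &[&str], rows: &[Vec<i64>]) -> Vec<Vec<(String, String)>> {
    // Invariant work done once, outside the loop.
    let keys: Vec<String> = members.iter().map(|k| k.to_string()).collect();

    rows.iter()
        .map(|row| {
            // Per-row work reuses the precomputed keys.
            keys.iter()
                .cloned()
                .zip(row.iter().map(|v| v.to_string()))
                .collect()
        })
        .collect()
}
```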

This file was deleted.

@@ -1,5 +1,4 @@
mod calculation;
mod common;
mod dimension;
mod get_date_range;
mod leaf_measure;
@@ -1,6 +1,6 @@
use super::query_tools::QueryTools;
use super::top_level_planner::TopLevelPlanner;
use super::QueryProperties;
use super::{QueryProperties, QueryPropertiesCompiler};
use crate::cube_bridge::base_query_options::BaseQueryOptions;
use crate::cube_bridge::pre_aggregation_obj::NativePreAggregationObj;
use crate::logical_plan::PreAggregationUsage;
@@ -61,7 +61,7 @@ impl<IT: InnerTypes> BaseQuery<IT> {
options.static_data().member_to_alias.clone(),
)?;

let request = QueryProperties::try_new(query_tools.clone(), options)?;
let request = QueryPropertiesCompiler::new(query_tools.clone()).build(options)?;

Ok(Self {
context,
2 changes: 2 additions & 0 deletions rust/cube/cubesqlplanner/cubesqlplanner/src/planner/mod.rs
@@ -15,6 +15,7 @@ pub mod visitor;
pub mod params_allocator;
pub mod planners;
pub mod query_properties;
pub mod query_properties_compiler;
pub mod query_tools;
pub mod sql_templates;
pub mod top_level_planner;
@@ -29,6 +30,7 @@ pub use compiler::Compiler;
pub use join_hints::JoinHints;
pub use params_allocator::ParamsAllocator;
pub use query_properties::{FullKeyAggregateMeasures, OrderByItem, QueryProperties};
pub use query_properties_compiler::QueryPropertiesCompiler;
pub use sql_call::*;
pub use symbols::*;
pub use time_dimension::*;
@@ -59,13 +59,6 @@ pub struct MeasuresJoinHints {
}

impl MeasuresJoinHints {
pub fn empty() -> Self {
Self {
base_hints: JoinHints::new(),
measure_hints: vec![],
}
}

pub fn builder(query_join_hints: &JoinHints) -> MeasuresJoinHintsBuilder {
MeasuresJoinHintsBuilder {
initial_hints: query_join_hints.clone(),
@@ -137,16 +130,6 @@ pub struct MultiFactJoinGroups {
}

impl MultiFactJoinGroups {
pub fn empty(query_tools: Rc<QueryTools>) -> Self {
Self {
query_tools,
measures_join_hints: MeasuresJoinHints::empty(),
groups: vec![],
dimension_paths: HashMap::new(),
measure_paths: HashMap::new(),
}
}

pub fn try_new(
query_tools: Rc<QueryTools>,
measures_join_hints: MeasuresJoinHints,
@@ -3,7 +3,6 @@ use crate::logical_plan::{pretty_print_rc, DimensionSubQuery};
use crate::physical_plan::QualifiedColumnName;
use crate::planner::collectors::collect_sub_query_dimensions;
use crate::planner::filter::FilterItem;
use crate::planner::join_hints::JoinHints;
use crate::planner::query_tools::QueryTools;
use crate::planner::QueryProperties;
use crate::planner::{MemberExpressionExpression, MemberExpressionSymbol, MemberSymbol};
@@ -111,26 +110,17 @@ impl DimensionSubqueryPlanner {
(vec![], vec![])
};

let sub_query_properties = QueryProperties::try_new_from_precompiled(
self.query_tools.clone(),
vec![measure.clone()], //measures,
primary_keys_dimensions.clone(),
vec![],
time_dimensions_filters,
dimensions_filters,
vec![],
vec![],
vec![],
None,
None,
true,
false,
false,
false,
Rc::new(JoinHints::new()),
true,
self.query_properties.disable_external_pre_aggregations(),
)?;
let sub_query_properties = QueryProperties::builder()
.query_tools(self.query_tools.clone())
.measures(vec![measure.clone()])
.dimensions(primary_keys_dimensions.clone())
.time_dimensions_filters(time_dimensions_filters)
.dimensions_filters(dimensions_filters)
.ignore_cumulative(true)
.disable_external_pre_aggregations(
self.query_properties.disable_external_pre_aggregations(),
)
.build()?;
let query_planner = QueryPlanner::new(sub_query_properties, self.query_tools.clone());
let sub_query = query_planner.plan()?;
let result = Rc::new(DimensionSubQuery {
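The hunk above replaces a long positional constructor, where most arguments were defaults like `vec![]`, `None`, and `false`, with a builder that only names the options the sub-query overrides. A hypothetical minimal builder (not the real `QueryProperties` API) showing the pattern:

```rust
// Hypothetical types for illustration only; the real QueryProperties
// builder has many more fields.
#[derive(Debug, Default, PartialEq)]
pub struct SubQueryProps {
    measures: Vec<String>,
    ignore_cumulative: bool,
    disable_external_pre_aggregations: bool,
}

#[derive(Default)]
pub struct SubQueryPropsBuilder {
    props: SubQueryProps,
}

impl SubQueryProps {
    pub fn builder() -> SubQueryPropsBuilder {
        SubQueryPropsBuilder::default()
    }
}

impl SubQueryPropsBuilder {
    // Each setter consumes and returns the builder, enabling chaining.
    pub fn measures(mut self, m: Vec<String>) -> Self {
        self.props.measures = m;
        self
    }
    pub fn ignore_cumulative(mut self, v: bool) -> Self {
        self.props.ignore_cumulative = v;
        self
    }
    pub fn disable_external_pre_aggregations(mut self, v: bool) -> Self {
        self.props.disable_external_pre_aggregations = v;
        self
    }
    // Unset fields keep their Default values.
    pub fn build(self) -> SubQueryProps {
        self.props
    }
}
```

The payoff is at the call site: adding a new option no longer forces every caller to thread another positional argument through.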
@@ -63,7 +63,7 @@ impl FullKeyAggregateQueryPlanner {
offset: self.query_properties.offset(),
limit: self.query_properties.row_limit(),
ungrouped: self.query_properties.ungrouped(),
order_by: self.query_properties.order_by().clone(),
order_by: self.query_properties.order_by().to_vec(),
}))
.source(source)
.build();
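A minimal sketch (hypothetical types, not the planner's) of why `.clone()` becomes `.to_vec()` in the hunk above once a getter returns a slice rather than an owned `Vec`:

```rust
// A field that requires an owned Vec, like the builder's `order_by`.
fn take_owned(order_by: Vec<i32>) -> usize {
    order_by.len()
}

fn demo(order_by: &[i32]) -> usize {
    // Calling `.clone()` on a `&[i32]` just copies the reference;
    // `.to_vec()` allocates the owned Vec the field requires.
    take_owned(order_by.to_vec())
}
```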