PSA: Community Edition retires on January 1, 2026. Move to the Free Edition today to keep your work.

Databricks Free Edition is the new home for personal learning and exploration on Databricks. It’s perpetually free and built on modern Databricks, the same Data Intelligence Platform used by professionals. Free Edition lets you learn professional da...

  • 968 Views
  • 0 replies
  • 4 kudos
2 weeks ago
🎤 Call for Presentations: Data + AI Summit 2026 is Open!

June 15–18, 2026 Are you building the future with data and AI? Then this is your moment. The Call for Proposals for Data + AI Summit 2026 is officially open, and we want to hear from builders, practitioners, and innovators across the data and AI com...

  • 1345 Views
  • 4 replies
  • 6 kudos
2 weeks ago
Last Chance: Help Shape the 2026 Data + AI Summit | Win a Full Conference Pass

Your voice matters to us. We are planning the 2026 Data + AI Summit, and we’d love your input on what would make the experience even more valuable for you. Take a few minutes to share your feedback through our quick survey — your insights directly in...

  • 631 Views
  • 3 replies
  • 4 kudos
2 weeks ago
Level Up with Databricks Specialist Sessions

How to Register & Prepare: If you're interested in advancing your skills with Databricks through a Specialist Session, here's a clear guide on how to register and what free courses you can take to prepare effectively. How to Begin Your Learning Path: S...

  • 3240 Views
  • 2 replies
  • 9 kudos
10-02-2025
Celebrating Our First Brickster Champion: Louis Frolio

Our Champion program has always celebrated the customers who go above and beyond to engage, help others, and uplift the Community. Recently, we have seen remarkable participation from Bricksters as well—and their impact deserves recognition too. Begi...

  • 1378 Views
  • 7 replies
  • 14 kudos
11-21-2025
🌟 Community Pulse: Your Weekly Roundup! December 12 – 21, 2025

Learning doesn’t pause, and neither does the impact this Community continues to create! Across threads and time zones, the knowledge kept moving. Catch up on the highlights. Voices Shaping the Week: featuring the voices that brought clarity, ...

  • 153 Views
  • 0 replies
  • 1 kudos
Tuesday

Community Activity

Suheb
by Contributor
  • 101 Views
  • 4 replies
  • 2 kudos

How do I improve the performance of my Random Forest model on Databricks?

How can I make the model smarter or faster so the final answer is better?

  • 101 Views
  • 4 replies
  • 2 kudos
Latest Reply
jameswood32
Contributor
  • 2 kudos

Improving the performance of a Random Forest model on Databricks is usually about data quality, feature engineering, and hyperparameter tuning. Some tips: Feature Engineering: create meaningful features and remove irrelevant ones; encode categorical var...
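A minimal sketch of the hyperparameter-tuning tip above, using scikit-learn on synthetic data (the grid values are illustrative, not a recommendation):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a feature-engineered dataset.
X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Tune the knobs that usually matter most for Random Forests.
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=3, n_jobs=-1)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("CV accuracy:", search.best_score_)
```

On a real Databricks cluster the same grid can be parallelized across workers (e.g. with Hyperopt or Spark ML's CrossValidator), but the tuning logic is the same.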

  • 2 kudos
3 More Replies
jasmin_mbi
by New Contributor
  • 154 Views
  • 2 replies
  • 1 kudos

Impossible to create classic warehouse

Hello, we have already spent surprisingly many DBUs, although we have only uploaded a few tiny tables (9 tables with approx. 10 rows each). We had the idea to change the warehouse from the serverless starter warehouse to a classic 2X-Small in order to save DBUs....

  • 154 Views
  • 2 replies
  • 1 kudos
Latest Reply
Advika
Community Manager
  • 1 kudos

Hello @jasmin_mbi! Did the suggestion shared above help resolve the issue with creating a classic SQL warehouse? If yes, please consider marking the response as the accepted solution.

  • 1 kudos
1 More Replies
anujsen18
by Contributor
  • 81 Views
  • 0 replies
  • 1 kudos

[PARTNER BLOG] Zerobus Ingest on Databricks

Introduction TL;DR ZeroBus Ingest is a serverless, Kafka-free ingestion service in Databricks that allows applications and IoT devices to stream data directly into Delta Lake with low latency and minimal operational overhead. Real-time data ingestion...

  • 81 Views
  • 0 replies
  • 1 kudos
Sunil_Patidar
by New Contributor II
  • 185 Views
  • 2 replies
  • 1 kudos

Unable to read from or write to Snowflake Open Catalog via Databricks

I have Snowflake Iceberg tables whose metadata is stored in Snowflake Open Catalog. I am trying to read these tables from the Open Catalog and write back to the Open Catalog using Databricks. I have explored the available documentation but haven’t bee...

  • 185 Views
  • 2 replies
  • 1 kudos
Latest Reply
Louis_Frolio
Databricks Employee
  • 1 kudos

Greetings @Sunil_Patidar, Databricks and Snowflake can interoperate cleanly around Iceberg today, but how you do it matters. At a high level, interoperability works because both platforms meet at Apache Iceberg and the Iceberg REST Catalog API. Wh...
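As a sketch of what that REST-catalog meeting point looks like from the Spark side, these are illustrative catalog configs for attaching Snowflake Open Catalog as an Iceberg REST catalog; the catalog name, URI host, credentials, and warehouse are placeholders, and the exact options depend on your Open Catalog setup:

```python
# Illustrative Spark session configs (set via cluster config or SparkSession
# builder). "open_catalog" is a name we choose; angle-bracket values are
# placeholders you must fill in for your account.
CATALOG = "open_catalog"
iceberg_rest_conf = {
    f"spark.sql.catalog.{CATALOG}": "org.apache.iceberg.spark.SparkCatalog",
    f"spark.sql.catalog.{CATALOG}.type": "rest",
    f"spark.sql.catalog.{CATALOG}.uri":
        "https://<account>.snowflakecomputing.com/polaris/api/catalog",
    f"spark.sql.catalog.{CATALOG}.credential": "<client_id>:<client_secret>",
    f"spark.sql.catalog.{CATALOG}.warehouse": "<open_catalog_name>",
    # Ask the catalog to vend temporary storage credentials to the engine.
    f"spark.sql.catalog.{CATALOG}.header.X-Iceberg-Access-Delegation":
        "vended-credentials",
}
```

With these set, tables become addressable as `open_catalog.<namespace>.<table>` from Spark SQL.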

  • 1 kudos
1 More Replies
Advika
by Community Manager
  • 153 Views
  • 0 replies
  • 2 kudos

🌟 Community Pulse: Your Weekly Roundup! December 12 – 21, 2025

Learning doesn’t pause, and neither does the impact this Community continues to create! Across threads and time zones, the knowledge kept moving. Catch up on the highlights. Voices Shaping the Week: featuring the voices that brought clarity, ...

  • 153 Views
  • 0 replies
  • 2 kudos
ciaran
by New Contributor
  • 42 Views
  • 1 reply
  • 0 kudos

Is GCP Workload Identity Federation supported for BigQuery connections in Azure Databricks?

I’m trying to set up a BigQuery connection in Azure Databricks (Unity Catalog / Lakehouse Federation) using GCP Workload Identity Federation (WIF) instead of a GCP service account key. Environment: Azure Databricks workspace; BigQuery query federation via...

  • 42 Views
  • 1 reply
  • 0 kudos
Latest Reply
Hubert-Dudek
Databricks MVP
  • 0 kudos

I guess that it is the only one accepted, as the docs say "Google service account key json".

  • 0 kudos
Hubert-Dudek
by Databricks MVP
  • 33 Views
  • 0 replies
  • 1 kudos

Confluence Lakeflow Connector

Incrementally upload data from Confluence. I remember there were a few times in my life when I spent weeks on this. Now, it is incredible how simple it is to implement with Lakeflow Connect. Additionally, I love the DABs-first approach for connectors,...

  • 33 Views
  • 0 replies
  • 1 kudos
pavelhym
by New Contributor
  • 48 Views
  • 1 reply
  • 1 kudos

Usage of MLFlow models inside Streamlit app in Databricks

I have an issue with loading a registered MLflow model into a Streamlit app inside Databricks. This is the sample code used for model load: import mlflow; from mlflow.tracking import MlflowClient; mlflow.set_tracking_uri("databricks"); mlflow.set_registry_uri...

  • 48 Views
  • 1 reply
  • 1 kudos
Latest Reply
iyashk-DB
Databricks Employee
  • 1 kudos

Authentication context isn’t automatically available in Apps. Notebooks automatically inject the workspace host and token for MLflow when you use mlflow.set_tracking_uri("databricks") and mlflow.set_registry_uri("databricks-uc"). In Databricks Apps, you ...
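A minimal sketch of supplying that context yourself in an App; `DATABRICKS_HOST`/`DATABRICKS_TOKEN` are the standard environment variables the MLflow Databricks integration reads, and how you obtain the token (secret scope, app resource) is up to your setup:

```python
import os


def databricks_auth_env(host: str, token: str) -> dict:
    """Env vars the MLflow Databricks integration reads for authentication."""
    return {"DATABRICKS_HOST": host, "DATABRICKS_TOKEN": token}


def configure_mlflow(host: str, token: str):
    """Export credentials, then point MLflow at the workspace and UC registry."""
    os.environ.update(databricks_auth_env(host, token))
    import mlflow  # deferred: auth must be in place before mlflow resolves it
    mlflow.set_tracking_uri("databricks")
    mlflow.set_registry_uri("databricks-uc")
    return mlflow
```

Call `configure_mlflow(...)` once at app startup, before any `mlflow.pyfunc.load_model(...)` call in the Streamlit code.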

  • 1 kudos
Suheb
by Contributor
  • 28 Views
  • 1 reply
  • 1 kudos

How do I implement and train a custom PyTorch model on Databricks using distributed training?

How can I build my own PyTorch machine-learning model and train it faster on Databricks by using multiple machines/GPUs instead of just one?

  • 28 Views
  • 1 reply
  • 1 kudos
Latest Reply
KaushalVachhani
Databricks Employee
  • 1 kudos

@Suheb, you may look at TorchDistributor. It provides multiple distributed training options, including single-node multi-GPU training and multi-node training. Below are the references for you: https://docs.databricks.com/aws/en/machine-...
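A hedged sketch of the TorchDistributor pattern from those docs; the training function is a stub, and the `num_processes`/`use_gpu` values are illustrative (it assumes a Databricks ML runtime where `pyspark.ml.torch` is available):

```python
def per_worker_batch(global_batch: int, num_processes: int) -> int:
    """Each worker gets an equal slice of the global batch (remainder dropped)."""
    return global_batch // num_processes


def train(global_batch: int = 256):
    # Runs once per process under TorchDistributor; imports are deferred
    # because this function body executes on the workers.
    import torch
    import torch.distributed as dist

    dist.init_process_group("gloo")  # use "nccl" when use_gpu=True
    batch = per_worker_batch(global_batch, dist.get_world_size())
    model = torch.nn.Linear(8, 1)
    # ... wrap in DistributedDataParallel, iterate a DistributedSampler ...
    dist.destroy_process_group()
    return dist.get_rank


def launch():
    # Ships `train` to the cluster; requires a Databricks ML runtime.
    from pyspark.ml.torch.distributor import TorchDistributor
    return TorchDistributor(num_processes=2, local_mode=False,
                            use_gpu=True).run(train, 256)
```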

  • 1 kudos
JothyGanesan
by New Contributor III
  • 72 Views
  • 2 replies
  • 4 kudos

Resolved! Vacuum on DLT

We are currently using DLT tables as our target tables. The tables are loaded by continuous job pipelines. Liquid clustering is enabled on the tables. Will VACUUM work on these tables while they are being loaded in continuous mode? How to run t...

  • 72 Views
  • 2 replies
  • 4 kudos
Latest Reply
iyashk-DB
Databricks Employee
  • 4 kudos

VACUUM works fine on DLT tables running in continuous mode. DLT does automatic maintenance (OPTIMIZE + VACUUM) roughly every 24 hours if the pipeline has a maintenance cluster configured. Q: The liquid cluster is enabled in the tables. Will Vacuum wo...
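If you do want to run VACUUM manually alongside that automatic maintenance, a small sketch (the table name is a placeholder; the 168-hour floor mirrors Delta's default 7-day retention, which protects concurrent readers of a continuously updated table):

```python
def vacuum_sql(table: str, retain_hours: int = 168) -> str:
    """Build a VACUUM statement for a (DLT-managed) Delta table.

    Retaining less than 7 days risks deleting files that long-running or
    continuous readers still reference, so we refuse shorter retention here.
    """
    assert retain_hours >= 168, "retain >= 168 hours to protect readers"
    return f"VACUUM {table} RETAIN {retain_hours} HOURS"

# Usage from any job/notebook with access to the table (spark is illustrative):
# spark.sql(vacuum_sql("catalog.schema.my_dlt_table"))
```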

  • 4 kudos
1 More Replies
RodrigoE
by New Contributor II
  • 54 Views
  • 2 replies
  • 0 kudos

Vector search index very slow

Hello, I have created a vector search index for a Delta table with 1,400 rows. Using this vector index to find matching records on a table with 52M records, the query below ran for 20hrs and failed with: 'HTTP request failed with status: {"error_c...

Machine Learning
vector search index
  • 54 Views
  • 2 replies
  • 0 kudos
Latest Reply
iyashk-DB
Databricks Employee
  • 0 kudos

Hi @RodrigoE, your LATERAL subquery calls the Vector Search function once for every row of the 52M-row table, which results in tens of millions of remote calls to the Vector Search endpoint. This is not a scalable pattern and will be extremely slow, leadin...
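One way to avoid the per-row remote calls, sketched here as a local broadcast match rather than the Vector Search endpoint (a swapped-in technique, workable precisely because the indexed side is only 1,400 vectors):

```python
import numpy as np


def best_matches(queries: np.ndarray, corpus: np.ndarray) -> np.ndarray:
    """Index of the nearest corpus row (cosine similarity) for each query row.

    With ~1,400 corpus vectors, broadcasting them to the workers and scoring
    locally (e.g. inside a pandas UDF over the 52M-row table) replaces tens
    of millions of remote Vector Search calls with one matrix multiply per
    partition.
    """
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return (q @ c.T).argmax(axis=1)
```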

  • 0 kudos
1 More Replies
DylanStout
by Contributor
  • 220 Views
  • 4 replies
  • 1 kudos

Catalog tag filter error

When trying to filter the catalog on "Tag", it throws an error that it failed to load values. The other filters do load. I have tried it with different computes, and I have a view that has a tag (as shown in the screenshot). I have the following privi...

  • 220 Views
  • 4 replies
  • 1 kudos
Latest Reply
Advika
Community Manager
  • 1 kudos

Hello @DylanStout! Did the suggestions shared above help resolve your concern? If so, please consider marking the response as the accepted solution. If you found a different approach that worked, sharing it would be helpful for others in the community...

  • 1 kudos
3 More Replies
D_Science
by New Contributor
  • 66 Views
  • 1 reply
  • 1 kudos

Local LLMs available in Databricks for email classification

Hello everyone, I am currently working on an email classification model in Azure Databricks. Since I work for an international company, the emails contain PII data. Because of this, I need to be very careful about compliance and data privacy, especial...

  • 66 Views
  • 1 reply
  • 1 kudos
Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi, it is absolutely acceptable. Here are some details that you may want to consider. I'd also think about GPU availability in your cloud and region for deploying these models. You should be able to easily te...

  • 1 kudos
TFV
by > New Contributor
  • 103 Views
  • 1 reply
  • 1 kudos

Regression: Dashboard slicer paste now commits invalid filter values instead of searching

Hi Team, we appear to be experiencing a recent regression in the AI/BI dashboard filter slicer behaviour. Steps to reproduce: open a dashboard containing a single-select or multi-select filter slicer; click into the slicer’s text input; paste text from the...

  • 103 Views
  • 1 reply
  • 1 kudos
Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi Tim, I can't find any mention of this internally, but I suspect it is related to this change: "Multi-select filter paste: Viewers can now copy a column of values from a spreadsheet and paste them into a multi-select filter." My recommendation w...

  • 1 kudos
sher_1222
by New Contributor
  • 112 Views
  • 3 replies
  • 0 kudos

Data ingestion errors

I was trying to ingest data from a website into Databricks, but it shows a "Public DBFS is not enabled" message. Is there any other way to automate data ingestion into Databricks?

  • 112 Views
  • 3 replies
  • 0 kudos
Latest Reply
saurabh18cs
Honored Contributor II
  • 0 kudos

Hi @sher_1222, yes, you can upload to cloud storage and then connect using Unity Catalog (Connect to cloud object storage using Unity Catalog - Azure Databricks | Microsoft Learn), and then use What is Auto Loader? | Databricks on AWS to automatically ing...
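A minimal sketch of that Auto Loader step; the paths, table name, and JSON format are placeholders for your setup:

```python
def autoloader_options(fmt: str, schema_path: str) -> dict:
    """Core Auto Loader options: file format plus where to store the
    inferred schema across runs."""
    return {"cloudFiles.format": fmt,
            "cloudFiles.schemaLocation": schema_path}


def ingest(spark, source_path: str, target_table: str, checkpoint: str):
    """Incrementally load new files from cloud storage into a Delta table.

    availableNow processes everything currently present, then stops, so the
    same call works as a scheduled job.
    """
    reader = spark.readStream.format("cloudFiles")
    for k, v in autoloader_options("json", f"{checkpoint}/schema").items():
        reader = reader.option(k, v)
    return (reader.load(source_path)
            .writeStream.option("checkpointLocation", checkpoint)
            .trigger(availableNow=True)
            .toTable(target_table))
```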

  • 0 kudos
2 More Replies