
Conversation

@openinx (Collaborator) commented Jan 7, 2026

Which Delta project/connector is this regarding?

  • Spark
  • Standalone
  • Flink
  • Kernel
  • Other (fill in here)

Description

Port the Python UnityCatalogManagedTableReadSuite into the Spark+UC integration tests.

The test_streaming_read case will be covered separately in UCDeltaStreamingTest ( https://github.com/delta-io/delta/pull/5719/files ), so it is omitted here.

How was this patch tested?

  • Local test
build/sbt "sparkUnityCatalog/testOnly io.sparkuctest.UCDeltaTableDMLTest"
  • Remote UC test
export UC_REMOTE=true
export UC_URI=$UC_URI
export UC_CATALOG_NAME=main
export UC_SCHEMA_NAME=demo_zh
export UC_BASE_TABLE_LOCATION=$S3_BASE_LOCATION

build/sbt "sparkUnityCatalog/testOnly io.sparkuctest.UCDeltaTableDMLTest"

Does this PR introduce any user-facing changes?

No

Copilot AI left a comment
Pull request overview

This PR ports the Python UnityCatalogManagedTableReadSuite integration tests to the Spark+UC Java integration test suite. It adds comprehensive test coverage for read operations on Unity Catalog Delta tables, including time travel queries and Change Data Feed (CDF) functionality.

Key changes:

  • Adds UCDeltaTableReadTest with parameterized tests for both EXTERNAL and MANAGED table types
  • Tests time travel READ operations using VERSION AS OF and TIMESTAMP AS OF syntax
  • Tests CDF READ operations with both version and timestamp parameters, with proper handling of catalog-managed vs external table differences
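
The time-travel statements exercised by these tests follow the two shapes below. A minimal sketch of how the suite's `sql(format, args)` calls expand (the `versionAsOf`/`timestampAsOf` helper names are illustrative, not part of the suite):

```java
// Illustrative sketch of the time-travel SQL shapes issued by the tests.
// The helper method names here are hypothetical.
public class TimeTravelSql {

    // VERSION AS OF pins the read to a specific Delta commit version.
    static String versionAsOf(String table, long version) {
        return String.format("SELECT * FROM %s VERSION AS OF %d ORDER BY id", table, version);
    }

    // TIMESTAMP AS OF pins the read to the latest commit at or before the timestamp.
    static String timestampAsOf(String table, String timestamp) {
        return String.format("SELECT * FROM %s TIMESTAMP AS OF '%s' ORDER BY id", table, timestamp);
    }

    public static void main(String[] args) {
        System.out.println(versionAsOf("unity.default.time_travel_test", 2));
        System.out.println(timestampAsOf("unity.default.time_travel_test", "2026-01-07 00:00:00"));
    }
}
```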


Comment on lines +56 to +63
List<List<String>> versionResult1 =
sql("SELECT * FROM %s VERSION AS OF %d ORDER BY id", tableName, currentVersion);
check(versionResult1, List.of(List.of("1"), List.of("2"), List.of("3")));

// Test VERSION AS OF with SQL syntax
List<List<String>> versionResult2 =
sql("SELECT * FROM %s VERSION AS OF %d ORDER BY id", tableName, currentVersion);
check(versionResult2, List.of(List.of("1"), List.of("2"), List.of("3")));
Copilot AI Jan 7, 2026

Lines 56-58 and 61-63 contain duplicate code that performs the same operation. Both blocks execute identical VERSION AS OF queries and checks. The comment on line 55 says "Test VERSION AS OF with DataFrameReader API" but the code uses SQL syntax, and line 60 says "Test VERSION AS OF with SQL syntax" which is the same as the previous block. One of these duplicate blocks should be removed, or if testing different APIs is intended, the DataFrameReader API test should actually use the DataFrame API instead of SQL.

Comment on lines +55 to +64
// Test VERSION AS OF with DataFrameReader API
List<List<String>> versionResult1 =
sql("SELECT * FROM %s VERSION AS OF %d ORDER BY id", tableName, currentVersion);
check(versionResult1, List.of(List.of("1"), List.of("2"), List.of("3")));

// Test VERSION AS OF with SQL syntax
List<List<String>> versionResult2 =
sql("SELECT * FROM %s VERSION AS OF %d ORDER BY id", tableName, currentVersion);
check(versionResult2, List.of(List.of("1"), List.of("2"), List.of("3")));

Copilot AI Jan 7, 2026

The variable name 'versionResult1' followed by 'versionResult2' suggests these are meant to test different scenarios or produce different results, but both variables store identical query results. Consider using more descriptive names that reflect what each test is validating, or removing the duplicate test if only one is needed.

Suggested change
// Test VERSION AS OF with DataFrameReader API
List<List<String>> versionResult1 =
sql("SELECT * FROM %s VERSION AS OF %d ORDER BY id", tableName, currentVersion);
check(versionResult1, List.of(List.of("1"), List.of("2"), List.of("3")));
// Test VERSION AS OF with SQL syntax
List<List<String>> versionResult2 =
sql("SELECT * FROM %s VERSION AS OF %d ORDER BY id", tableName, currentVersion);
check(versionResult2, List.of(List.of("1"), List.of("2"), List.of("3")));
// Test VERSION AS OF
List<List<String>> versionAsOfResult =
sql("SELECT * FROM %s VERSION AS OF %d ORDER BY id", tableName, currentVersion);
check(versionAsOfResult, List.of(List.of("1"), List.of("2"), List.of("3")));

Comment on lines +107 to +108
message.toLowerCase().contains("path based access")
|| message.toLowerCase().contains("catalog-managed"),
Copilot AI Jan 7, 2026

The error message validation uses 'message.toLowerCase().contains(...)' for some checks but 'message.contains(...)' for others without converting to lowercase. This inconsistency could lead to fragile tests if the error message case changes. Consider either consistently using toLowerCase() for all case-insensitive checks, or using case-sensitive exact matches if the message format is guaranteed.

Suggested change
message.toLowerCase().contains("path based access")
|| message.toLowerCase().contains("catalog-managed"),
message.contains("path based access")
|| message.contains("catalog-managed"),
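
One way to make all of these checks uniformly case-insensitive is a small shared helper. A minimal sketch, assuming nothing beyond plain Java (the helper name and its null handling are assumptions, not part of the suite):

```java
import java.util.Locale;

// Hypothetical helper for uniformly case-insensitive error-message checks.
public class MessageChecks {

    // Returns true if the message contains any needle, ignoring case.
    // A null message matches nothing.
    static boolean containsAnyIgnoreCase(String message, String... needles) {
        if (message == null) {
            return false;
        }
        String lower = message.toLowerCase(Locale.ROOT);
        for (String needle : needles) {
            if (lower.contains(needle.toLowerCase(Locale.ROOT))) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(containsAnyIgnoreCase(
            "Path based access is not supported", "path based access", "catalog-managed"));
    }
}
```

An assertion such as `Assertions.assertTrue(containsAnyIgnoreCase(message, "path based access", "catalog-managed"), ...)` then behaves the same regardless of how the server capitalizes the message.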

Comment on lines +157 to +159
Assertions.assertTrue(
message.contains("UPDATE_DELTA_METADATA")
|| message.toLowerCase().contains("catalog-managed"),
Copilot AI Jan 7, 2026

The error message validation pattern is inconsistent with the one used in testChangeDataFeedWithTimestamp. Here, 'UPDATE_DELTA_METADATA' is checked with case-sensitive contains() while 'catalog-managed' uses case-insensitive toLowerCase().contains(). Consider using a consistent approach across both methods for better maintainability.

Suggested change
Assertions.assertTrue(
message.contains("UPDATE_DELTA_METADATA")
|| message.toLowerCase().contains("catalog-managed"),
String lowerMessage = message == null ? "" : message.toLowerCase();
Assertions.assertTrue(
lowerMessage.contains("update_delta_metadata")
|| lowerMessage.contains("catalog-managed"),

Comment on lines +207 to +210
Assertions.assertTrue(
message.contains("AccessDeniedException")
|| message.toLowerCase().contains("access denied")
|| message.toLowerCase().contains("catalog-managed"),
Copilot AI Jan 7, 2026

The error message validation pattern is inconsistent across the test methods. This check uses 'AccessDeniedException' (case-sensitive), 'access denied' (case-insensitive), and 'catalog-managed' (case-insensitive). Consider standardizing the approach to either always use case-sensitive or case-insensitive matching for better code consistency.

Suggested change
Assertions.assertTrue(
message.contains("AccessDeniedException")
|| message.toLowerCase().contains("access denied")
|| message.toLowerCase().contains("catalog-managed"),
String messageLower = message.toLowerCase();
Assertions.assertTrue(
messageLower.contains("accessdeniedexception")
|| messageLower.contains("access denied")
|| messageLower.contains("catalog-managed"),

/**
* Read operation test suite for Delta Table operations through Unity Catalog.
*
* <p>Covers time travel, change data feed, streaming, and path-based access scenarios. Tests are
Copilot AI Jan 7, 2026

The class documentation mentions 'streaming' as one of the covered scenarios, but there are no streaming test methods in this class. According to the PR description, streaming tests will be covered separately in UCDeltaStreamingTest. The documentation should be updated to remove the reference to streaming to accurately reflect what this test class covers.

Suggested change
* <p>Covers time travel, change data feed, streaming, and path-based access scenarios. Tests are
* <p>Covers time travel, change data feed, and path-based access scenarios. Tests are

@openinx (Collaborator, Author) commented Jan 7, 2026

The failures are:

[info] Test io.sparkuctest.UCDeltaTableReadTest#testTimeTravelRead(io.sparkuctest.UCDeltaTableIntegrationBaseTest$TableType):#1 started
[error] Test io.sparkuctest.UCDeltaTableReadTest.testTimeTravelRead failed: org.apache.spark.sql.catalyst.parser.ParseException: 
[error] [PARSE_SYNTAX_ERROR] Syntax error at or near 'unity': missing ')'. SQLSTATE: 42601 (line 1, pos 43)
[error] 
[error] == SQL ==
[error] SELECT max(version) FROM (DESCRIBE HISTORY unity.default.time_travel_test)
[error] -------------------------------------------^^^
[error] , took 3.746s
[error]     at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(parsers.scala:285)
[error]     at org.apache.spark.sql.catalyst.parser.AbstractParser.parse(parsers.scala:97)
[error]     at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:54)
[error]     at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(AbstractSqlParser.scala:93)
[error]     at io.delta.sql.parser.DeltaSqlParser.$anonfun$parsePlan$1(DeltaSqlParser.scala:84)
[error]     at io.delta.sql.parser.DeltaSqlParser.parse(DeltaSqlParser.scala:117)
[error]     at io.delta.sql.parser.DeltaSqlParser.parsePlan(DeltaSqlParser.scala:79)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$5(SparkSession.scala:492)
[error]     at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:148)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$4(SparkSession.scala:491)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:490)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:504)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:513)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:91)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest$SparkSQLExecutor.runSQL(UCDeltaTableIntegrationBaseTest.java:251)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.sql(UCDeltaTableIntegrationBaseTest.java:125)
[error]     at io.sparkuctest.UCDeltaTableReadTest.lambda$testTimeTravelRead$0(UCDeltaTableReadTest.java:45)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.lambda$withNewTable$0(UCDeltaTableIntegrationBaseTest.java:175)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withTempDir(UCDeltaTableIntegrationBaseTest.java:142)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withNewTable(UCDeltaTableIntegrationBaseTest.java:167)
[error]     at io.sparkuctest.UCDeltaTableReadTest.testTimeTravelRead(UCDeltaTableReadTest.java:35)
[error]     at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error]     at java.lang.reflect.Method.invoke(Method.java:580)
[error]     ...
[info] Test io.sparkuctest.UCDeltaTableReadTest#testTimeTravelRead(io.sparkuctest.UCDeltaTableIntegrationBaseTest$TableType):#2 started
[error] Test io.sparkuctest.UCDeltaTableReadTest.testTimeTravelRead failed: org.apache.spark.sql.catalyst.parser.ParseException: 
[error] [PARSE_SYNTAX_ERROR] Syntax error at or near 'unity': missing ')'. SQLSTATE: 42601 (line 1, pos 43)
[error] 
[error] == SQL ==
[error] SELECT max(version) FROM (DESCRIBE HISTORY unity.default.time_travel_test)
[error] -------------------------------------------^^^
[error] , took 0.862s
[error]     at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(parsers.scala:285)
[error]     at org.apache.spark.sql.catalyst.parser.AbstractParser.parse(parsers.scala:97)
[error]     at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:54)
[error]     at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(AbstractSqlParser.scala:93)
[error]     at io.delta.sql.parser.DeltaSqlParser.$anonfun$parsePlan$1(DeltaSqlParser.scala:84)
[error]     at io.delta.sql.parser.DeltaSqlParser.parse(DeltaSqlParser.scala:117)
[error]     at io.delta.sql.parser.DeltaSqlParser.parsePlan(DeltaSqlParser.scala:79)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$5(SparkSession.scala:492)
[error]     at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:148)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$4(SparkSession.scala:491)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:490)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:504)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:513)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:91)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest$SparkSQLExecutor.runSQL(UCDeltaTableIntegrationBaseTest.java:251)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.sql(UCDeltaTableIntegrationBaseTest.java:125)
[error]     at io.sparkuctest.UCDeltaTableReadTest.lambda$testTimeTravelRead$0(UCDeltaTableReadTest.java:45)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withNewTable(UCDeltaTableIntegrationBaseTest.java:189)
[error]     at io.sparkuctest.UCDeltaTableReadTest.testTimeTravelRead(UCDeltaTableReadTest.java:35)
[error]     at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error]     at java.lang.reflect.Method.invoke(Method.java:580)
[error]     ...
[info] Test io.sparkuctest.UCDeltaTableReadTest#testDeltaTableForPath(io.sparkuctest.UCDeltaTableIntegrationBaseTest$TableType):#1 started
[error] Test io.sparkuctest.UCDeltaTableReadTest.testDeltaTableForPath failed: org.apache.spark.sql.catalyst.parser.ParseException: 
[error] [PARSE_SYNTAX_ERROR] Syntax error at or near 'unity': missing ')'. SQLSTATE: 42601 (line 1, pos 40)
[error] 
[error] == SQL ==
[error] SELECT location FROM (DESCRIBE EXTENDED unity.default.delta_table_for_path_test) WHERE col_name = 'Location'
[error] ----------------------------------------^^^
[error] , took 0.727s
[error]     at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(parsers.scala:285)
[error]     at org.apache.spark.sql.catalyst.parser.AbstractParser.parse(parsers.scala:97)
[error]     at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:54)
[error]     at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(AbstractSqlParser.scala:93)
[error]     at io.delta.sql.parser.DeltaSqlParser.$anonfun$parsePlan$1(DeltaSqlParser.scala:84)
[error]     at io.delta.sql.parser.DeltaSqlParser.parse(DeltaSqlParser.scala:117)
[error]     at io.delta.sql.parser.DeltaSqlParser.parsePlan(DeltaSqlParser.scala:79)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$5(SparkSession.scala:492)
[error]     at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:148)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$4(SparkSession.scala:491)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:490)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:504)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:513)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:91)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest$SparkSQLExecutor.runSQL(UCDeltaTableIntegrationBaseTest.java:251)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.sql(UCDeltaTableIntegrationBaseTest.java:125)
[error]     at io.sparkuctest.UCDeltaTableReadTest.lambda$testDeltaTableForPath$9(UCDeltaTableReadTest.java:187)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.lambda$withNewTable$0(UCDeltaTableIntegrationBaseTest.java:175)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withTempDir(UCDeltaTableIntegrationBaseTest.java:142)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withNewTable(UCDeltaTableIntegrationBaseTest.java:167)
[error]     at io.sparkuctest.UCDeltaTableReadTest.testDeltaTableForPath(UCDeltaTableReadTest.java:177)
[error]     at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error]     at java.lang.reflect.Method.invoke(Method.java:580)
[error]     ...
[info] Test io.sparkuctest.UCDeltaTableReadTest#testDeltaTableForPath(io.sparkuctest.UCDeltaTableIntegrationBaseTest$TableType):#2 started
[error] Test io.sparkuctest.UCDeltaTableReadTest.testDeltaTableForPath failed: org.apache.spark.sql.catalyst.parser.ParseException: 
[error] [PARSE_SYNTAX_ERROR] Syntax error at or near 'unity': missing ')'. SQLSTATE: 42601 (line 1, pos 40)
[error] 
[error] == SQL ==
[error] SELECT location FROM (DESCRIBE EXTENDED unity.default.delta_table_for_path_test) WHERE col_name = 'Location'
[error] ----------------------------------------^^^
[error] , took 0.728s
[error]     at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(parsers.scala:285)
[error]     at org.apache.spark.sql.catalyst.parser.AbstractParser.parse(parsers.scala:97)
[error]     at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:54)
[error]     at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(AbstractSqlParser.scala:93)
[error]     at io.delta.sql.parser.DeltaSqlParser.$anonfun$parsePlan$1(DeltaSqlParser.scala:84)
[error]     at io.delta.sql.parser.DeltaSqlParser.parse(DeltaSqlParser.scala:117)
[error]     at io.delta.sql.parser.DeltaSqlParser.parsePlan(DeltaSqlParser.scala:79)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$5(SparkSession.scala:492)
[error]     at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:148)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$4(SparkSession.scala:491)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:490)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:504)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:513)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:91)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest$SparkSQLExecutor.runSQL(UCDeltaTableIntegrationBaseTest.java:251)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.sql(UCDeltaTableIntegrationBaseTest.java:125)
[error]     at io.sparkuctest.UCDeltaTableReadTest.lambda$testDeltaTableForPath$9(UCDeltaTableReadTest.java:187)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withNewTable(UCDeltaTableIntegrationBaseTest.java:189)
[error]     at io.sparkuctest.UCDeltaTableReadTest.testDeltaTableForPath(UCDeltaTableReadTest.java:177)
[error]     at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error]     at java.lang.reflect.Method.invoke(Method.java:580)
[error]     ...
[info] Test io.sparkuctest.UCDeltaTableReadTest#testChangeDataFeedWithVersion(io.sparkuctest.UCDeltaTableIntegrationBaseTest$TableType):#1 started
[error] Test io.sparkuctest.UCDeltaTableReadTest.testChangeDataFeedWithVersion failed: java.lang.UnsupportedOperationException: Altering a table is not supported yet, took 0.286s
[error]     at io.unitycatalog.spark.UCSingleCatalog.alterTable(UCSingleCatalog.scala:198)
[error]     at io.unitycatalog.spark.UCSingleCatalog.alterTable(UCSingleCatalog.scala:31)
[error]     at org.apache.spark.sql.execution.datasources.v2.AlterTableExec.run(AlterTableExec.scala:38)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$2(QueryExecution.scala:155)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$8(SQLExecution.scala:163)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withSessionTagsApplied(SQLExecution.scala:272)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$7(SQLExecution.scala:125)
[error]     at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:112)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.withClassLoaderIfNeeded(ArtifactManager.scala:106)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:111)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:125)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:295)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:124)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:78)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:237)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:155)
[error]     at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:654)
[error]     at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$eagerlyExecute$1(QueryExecution.scala:154)
[error]     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$3.applyOrElse(QueryExecution.scala:169)
[error]     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$3.applyOrElse(QueryExecution.scala:164)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:470)
[error]     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:86)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:470)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:360)
[error]     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:356)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:446)
[error]     at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:164)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyCommandExecuted$1(QueryExecution.scala:126)
[error]     at scala.util.Try$.apply(Try.scala:217)
[error]     at org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1378)
[error]     at org.apache.spark.util.Utils$.getTryWithCallerStacktrace(Utils.scala:1439)
[error]     at org.apache.spark.util.LazyTry.get(LazyTry.scala:58)
[error]     at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:131)
[error]     at org.apache.spark.sql.classic.Dataset.<init>(Dataset.scala:277)
[error]     at org.apache.spark.sql.classic.Dataset$.$anonfun$ofRows$5(Dataset.scala:140)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.Dataset$.ofRows(Dataset.scala:136)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$4(SparkSession.scala:499)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:490)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:504)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:513)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:91)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest$SparkSQLExecutor.runSQL(UCDeltaTableIntegrationBaseTest.java:251)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.sql(UCDeltaTableIntegrationBaseTest.java:125)
[error]     at io.sparkuctest.UCDeltaTableReadTest.lambda$testChangeDataFeedWithVersion$6(UCDeltaTableReadTest.java:132)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.lambda$withNewTable$0(UCDeltaTableIntegrationBaseTest.java:175)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withTempDir(UCDeltaTableIntegrationBaseTest.java:142)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withNewTable(UCDeltaTableIntegrationBaseTest.java:167)
[error]     at io.sparkuctest.UCDeltaTableReadTest.testChangeDataFeedWithVersion(UCDeltaTableReadTest.java:126)
[error]     at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error]     at java.lang.reflect.Method.invoke(Method.java:580)
[error]     ...
[info] Test io.sparkuctest.UCDeltaTableReadTest#testChangeDataFeedWithVersion(io.sparkuctest.UCDeltaTableIntegrationBaseTest$TableType):#2 started
[error] Test io.sparkuctest.UCDeltaTableReadTest.testChangeDataFeedWithVersion failed: java.lang.UnsupportedOperationException: Altering a table is not supported yet, took 0.312s
[error]     at io.unitycatalog.spark.UCSingleCatalog.alterTable(UCSingleCatalog.scala:198)
[error]     at io.unitycatalog.spark.UCSingleCatalog.alterTable(UCSingleCatalog.scala:31)
[error]     at org.apache.spark.sql.execution.datasources.v2.AlterTableExec.run(AlterTableExec.scala:38)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$2(QueryExecution.scala:155)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$8(SQLExecution.scala:163)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withSessionTagsApplied(SQLExecution.scala:272)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$7(SQLExecution.scala:125)
[error]     at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:112)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.withClassLoaderIfNeeded(ArtifactManager.scala:106)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:111)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:125)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:295)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:124)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:78)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:237)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:155)
[error]     at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:654)
[error]     at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$eagerlyExecute$1(QueryExecution.scala:154)
[error]     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$3.applyOrElse(QueryExecution.scala:169)
[error]     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$3.applyOrElse(QueryExecution.scala:164)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:470)
[error]     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:86)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:470)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:360)
[error]     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:356)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:446)
[error]     at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:164)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyCommandExecuted$1(QueryExecution.scala:126)
[error]     at scala.util.Try$.apply(Try.scala:217)
[error]     at org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1378)
[error]     at org.apache.spark.util.Utils$.getTryWithCallerStacktrace(Utils.scala:1439)
[error]     at org.apache.spark.util.LazyTry.get(LazyTry.scala:58)
[error]     at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:131)
[error]     at org.apache.spark.sql.classic.Dataset.<init>(Dataset.scala:277)
[error]     at org.apache.spark.sql.classic.Dataset$.$anonfun$ofRows$5(Dataset.scala:140)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.Dataset$.ofRows(Dataset.scala:136)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$4(SparkSession.scala:499)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:490)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:504)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:513)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:91)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest$SparkSQLExecutor.runSQL(UCDeltaTableIntegrationBaseTest.java:251)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.sql(UCDeltaTableIntegrationBaseTest.java:125)
[error]     at io.sparkuctest.UCDeltaTableReadTest.lambda$testChangeDataFeedWithVersion$6(UCDeltaTableReadTest.java:132)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withNewTable(UCDeltaTableIntegrationBaseTest.java:189)
[error]     at io.sparkuctest.UCDeltaTableReadTest.testChangeDataFeedWithVersion(UCDeltaTableReadTest.java:126)
[error]     at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error]     at java.lang.reflect.Method.invoke(Method.java:580)
[error]     ...
[info] Test io.sparkuctest.UCDeltaTableReadTest#testChangeDataFeedWithTimestamp(io.sparkuctest.UCDeltaTableIntegrationBaseTest$TableType):#1 started
[error] Test io.sparkuctest.UCDeltaTableReadTest.testChangeDataFeedWithTimestamp failed: java.lang.UnsupportedOperationException: Altering a table is not supported yet, took 0.309s
[error]     at io.unitycatalog.spark.UCSingleCatalog.alterTable(UCSingleCatalog.scala:198)
[error]     at io.unitycatalog.spark.UCSingleCatalog.alterTable(UCSingleCatalog.scala:31)
[error]     at org.apache.spark.sql.execution.datasources.v2.AlterTableExec.run(AlterTableExec.scala:38)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$2(QueryExecution.scala:155)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$8(SQLExecution.scala:163)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withSessionTagsApplied(SQLExecution.scala:272)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$7(SQLExecution.scala:125)
[error]     at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:112)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.withClassLoaderIfNeeded(ArtifactManager.scala:106)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:111)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:125)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:295)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:124)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:78)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:237)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:155)
[error]     at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:654)
[error]     at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$eagerlyExecute$1(QueryExecution.scala:154)
[error]     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$3.applyOrElse(QueryExecution.scala:169)
[error]     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$3.applyOrElse(QueryExecution.scala:164)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:470)
[error]     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:86)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:470)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:360)
[error]     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:356)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:446)
[error]     at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:164)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyCommandExecuted$1(QueryExecution.scala:126)
[error]     at scala.util.Try$.apply(Try.scala:217)
[error]     at org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1378)
[error]     at org.apache.spark.util.Utils$.getTryWithCallerStacktrace(Utils.scala:1439)
[error]     at org.apache.spark.util.LazyTry.get(LazyTry.scala:58)
[error]     at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:131)
[error]     at org.apache.spark.sql.classic.Dataset.<init>(Dataset.scala:277)
[error]     at org.apache.spark.sql.classic.Dataset$.$anonfun$ofRows$5(Dataset.scala:140)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.Dataset$.ofRows(Dataset.scala:136)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$4(SparkSession.scala:499)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:490)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:504)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:513)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:91)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest$SparkSQLExecutor.runSQL(UCDeltaTableIntegrationBaseTest.java:251)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.sql(UCDeltaTableIntegrationBaseTest.java:125)
[error]     at io.sparkuctest.UCDeltaTableReadTest.lambda$testChangeDataFeedWithTimestamp$3(UCDeltaTableReadTest.java:81)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.lambda$withNewTable$0(UCDeltaTableIntegrationBaseTest.java:175)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withTempDir(UCDeltaTableIntegrationBaseTest.java:142)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withNewTable(UCDeltaTableIntegrationBaseTest.java:167)
[error]     at io.sparkuctest.UCDeltaTableReadTest.testChangeDataFeedWithTimestamp(UCDeltaTableReadTest.java:75)
[error]     at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error]     at java.lang.reflect.Method.invoke(Method.java:580)
[error]     ...
[info] Test io.sparkuctest.UCDeltaTableReadTest#testChangeDataFeedWithTimestamp(io.sparkuctest.UCDeltaTableIntegrationBaseTest$TableType):#2 started
[error] Test io.sparkuctest.UCDeltaTableReadTest.testChangeDataFeedWithTimestamp failed: java.lang.UnsupportedOperationException: Altering a table is not supported yet, took 0.297s
[error]     at io.unitycatalog.spark.UCSingleCatalog.alterTable(UCSingleCatalog.scala:198)
[error]     at io.unitycatalog.spark.UCSingleCatalog.alterTable(UCSingleCatalog.scala:31)
[error]     at org.apache.spark.sql.execution.datasources.v2.AlterTableExec.run(AlterTableExec.scala:38)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
[error]     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$2(QueryExecution.scala:155)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$8(SQLExecution.scala:163)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withSessionTagsApplied(SQLExecution.scala:272)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$7(SQLExecution.scala:125)
[error]     at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:112)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.withClassLoaderIfNeeded(ArtifactManager.scala:106)
[error]     at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:111)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:125)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:295)
[error]     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:124)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:78)
[error]     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:237)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:155)
[error]     at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:654)
[error]     at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$eagerlyExecute$1(QueryExecution.scala:154)
[error]     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$3.applyOrElse(QueryExecution.scala:169)
[error]     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$3.applyOrElse(QueryExecution.scala:164)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:470)
[error]     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:86)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:470)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:360)
[error]     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:356)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:37)
[error]     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:446)
[error]     at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:164)
[error]     at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyCommandExecuted$1(QueryExecution.scala:126)
[error]     at scala.util.Try$.apply(Try.scala:217)
[error]     at org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1378)
[error]     at org.apache.spark.util.Utils$.getTryWithCallerStacktrace(Utils.scala:1439)
[error]     at org.apache.spark.util.LazyTry.get(LazyTry.scala:58)
[error]     at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:131)
[error]     at org.apache.spark.sql.classic.Dataset.<init>(Dataset.scala:277)
[error]     at org.apache.spark.sql.classic.Dataset$.$anonfun$ofRows$5(Dataset.scala:140)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.Dataset$.ofRows(Dataset.scala:136)
[error]     at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$4(SparkSession.scala:499)
[error]     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:490)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:504)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:513)
[error]     at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:91)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest$SparkSQLExecutor.runSQL(UCDeltaTableIntegrationBaseTest.java:251)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.sql(UCDeltaTableIntegrationBaseTest.java:125)
[error]     at io.sparkuctest.UCDeltaTableReadTest.lambda$testChangeDataFeedWithTimestamp$3(UCDeltaTableReadTest.java:81)
[error]     at io.sparkuctest.UCDeltaTableIntegrationBaseTest.withNewTable(UCDeltaTableIntegrationBaseTest.java:189)
[error]     at io.sparkuctest.UCDeltaTableReadTest.testChangeDataFeedWithTimestamp(UCDeltaTableReadTest.java:75)
[error]     at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[error]     at java.lang.reflect.Method.invoke(Method.java:580)
[error]     ...
[info] Test run finished: 8 failed, 0 ignored, 8 total, 13.726s
[error] Failed: Total 8, Failed 8, Errors 0, Passed 0
[error] Failed tests:
[error] 	io.sparkuctest.UCDeltaTableReadTest
[error] (sparkUnityCatalog / Test / testOnly) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 58 s, completed Jan 6, 2026, 9:24:48 PM
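
All eight failures in the log above share one root cause: each test issues an `ALTER TABLE` statement (presumably to enable CDF via `SET TBLPROPERTIES` before reading the change feed), Spark compiles it to `AlterTableExec`, and `io.unitycatalog.spark.UCSingleCatalog.alterTable` rejects it with `UnsupportedOperationException`. The following is a minimal, hypothetical Java sketch of that rejection path — the method bodies and names are illustrative assumptions inferred from the stack traces and exception message, not the actual `UCSingleCatalog` source:

```java
// Hypothetical sketch of the failing path seen in every stack trace above.
// Spark routes ALTER TABLE statements to the catalog's alterTable(), and the
// UC catalog currently rejects all alterations. All names are illustrative.
public class AlterTableRejectionSketch {

    // Stand-in for io.unitycatalog.spark.UCSingleCatalog.alterTable
    static void alterTable(String ident) {
        throw new UnsupportedOperationException("Altering a table is not supported yet");
    }

    // Stand-in for the test step that enables CDF before reading it, e.g.
    // ALTER TABLE t SET TBLPROPERTIES ('delta.enableChangeDataFeed' = 'true')
    static String enableChangeDataFeed(String table) {
        try {
            alterTable(table);
            return "enabled";
        } catch (UnsupportedOperationException e) {
            return "failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(enableChangeDataFeed("main.demo_zh.cdf_table"));
    }
}
```

Until `alterTable` is implemented in `UCSingleCatalog`, any CDF test that enables the feed via table properties will fail the same way; enabling CDF at `CREATE TABLE` time via `TBLPROPERTIES` would avoid the ALTER path entirely.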
