Troubleshooting
This guide provides practical solutions for common issues when using DB Tester. For detailed exception specifications, see Error Handling.
Quick Diagnosis
Use this checklist to identify your issue category:
| Symptom | Category | Jump to |
|---|---|---|
| "Dataset directory not found" | Data Loading | DataSetLoadException |
| "File is empty" or parse errors | Data Loading | DataSetLoadException |
| "Table name conflict detected in AUTO format mode" | Data Loading | DataSetLoadException |
| "Assertion failed: N differences" | Validation | ValidationException |
| "No default data source registered" | Configuration | DataSource Issues |
| Test runs slowly | Performance | Performance Optimization |
| Unexpected test failures | Common Mistakes | Common Mistakes |
DataSetLoadException
Directory Not Found on Classpath
Symptom:

```
Dataset directory not found on classpath: 'com/example/UserRepositoryTest'
Expected location: src/test/resources/com/example/UserRepositoryTest
```

Diagnosis:
- Check that the directory exists at `src/test/resources/{package}/{TestClassName}/`
- Verify the package path uses forward slashes
- Confirm the test class name matches exactly (case-sensitive)
Solution:

```shell
# Create the directory structure
mkdir -p src/test/resources/com/example/UserRepositoryTest
```

Convention: The directory path follows `{package}/{TestClassName}/` by default. To customize, configure `baseDirectory` in Configuration.
No Supported Files Found
Symptom:

```
Dataset directory exists but contains no supported data files: '/path/to/datasets'
Supported file extensions: [.csv, .tsv, .json, .yaml]
Hint: Add at least one data file (for example, TABLE_NAME.csv)...
Found files: [README.txt, notes.md]
```

The `Found files` line lists all files in the directory to help diagnose the issue. The framework omits this line when the directory is empty.
Diagnosis:
- Check the `Found files` list for files with incorrect extensions
- Verify file extensions match the configured `dataFormat`
- Confirm files are not hidden (no `.` prefix) and reside in the correct directory level
Solution:
| dataFormat Setting | Expected Extension |
|---|---|
| `DataFormat.AUTO` (default) | `.csv`, `.tsv`, `.json`, `.yaml` |
| `DataFormat.CSV` | `.csv` |
| `DataFormat.TSV` | `.tsv` |
| `DataFormat.JSON` | `.json` |
| `DataFormat.YAML` | `.yaml` |
See Data Formats for file format details.
Empty File Error
Symptom:

```
File is empty: /path/to/USERS.csv
```

Solution: Add at least a header row and one data row:

```csv
ID,NAME,EMAIL
1,Alice,alice@example.com
```

Parse Failure
Symptom:

```
Failed to parse file: /path/to/USERS.csv
```

Diagnosis:
- Check for unescaped special characters (commas, quotes)
- Verify consistent column count across rows
- Check file encoding (UTF-8 recommended)
Solution:
- Escape commas in values: `"value, with comma"`
- Escape quotes: `"value ""with quotes"""`
- Use TSV format if data contains many commas
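The escaping rules above follow standard CSV quoting conventions (RFC 4180): wrap a field in double quotes when it contains a comma, quote, or newline, and double any embedded quotes. A minimal sketch in plain Java — illustrative only, not part of DB Tester:

```java
public class CsvEscape {
    // Quote a field per RFC 4180: wrap in double quotes when it contains
    // a comma, a quote, or a newline, and double any embedded quotes.
    static String escape(String field) {
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    public static void main(String[] args) {
        System.out.println(escape("value, with comma"));     // "value, with comma"
        System.out.println(escape("value \"with quotes\"")); // "value ""with quotes"""
    }
}
```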
Table Name Conflict in AUTO Format Mode
Symptom:

```
Table name conflict detected in AUTO format mode.
The following table names are defined in multiple files with different formats:
Table 'USERS':
- USERS.csv
- USERS.yaml
Each table name must be unique across all file formats in a directory.
To resolve, remove duplicate files or specify a concrete format:
DataFormat.CSV, DataFormat.TSV, DataFormat.JSON, or DataFormat.YAML
```

Diagnosis: When using `DataFormat.AUTO` (the default), the framework loads all supported file formats from the dataset directory. If the same table name appears in multiple files with different extensions, the framework cannot determine which file to use.
Solution:
| Approach | Action |
|---|---|
| Remove duplicates | Keep only one file per table name (for example, remove USERS.yaml if USERS.csv exists) |
| Specify concrete format | Set DataFormat.CSV, DataFormat.TSV, DataFormat.JSON, or DataFormat.YAML in ConventionSettings |
```java
// Option 1: Remove the duplicate file from the dataset directory

// Option 2: Specify a concrete format
var conventions = ConventionSettings.builder()
    .dataFormat(DataFormat.CSV)
    .build();
```

See Data Formats - Automatic Format Detection for details.
Load Order File Error
Symptom:

```
Failed to read load order file: /path/to/load-order.txt
```

Diagnosis: When using `TableOrderingStrategy.LOAD_ORDER_FILE`, the framework requires `load-order.txt`.
Solution: Create `load-order.txt` in your dataset directory:

```
PARENT_TABLE
CHILD_TABLE
GRANDCHILD_TABLE
```

See Data Formats - Load Order for details.
ValidationException
Understanding YAML Output
When validation fails, DB Tester produces structured YAML:

```
Assertion failed: 2 differences in USERS
summary:
  status: FAILED
  total_differences: 2
tables:
  USERS:
    differences:
      - path: row_count
        expected: 3
        actual: 2
      - path: "row[0].EMAIL"
        expected: john@example.com
        actual: jane@example.com
```

Row Count Mismatch
Symptom:

```yaml
- path: row_count
  expected: 3
  actual: 2
```

Diagnosis:
- Check `[Scenario]` column filtering
- Verify all expected rows exist in the CSV
- Check whether test logic deleted rows unexpectedly
Solution:
| Cause | Action |
|---|---|
| Missing `[Scenario]` value | Add the test method name to the `[Scenario]` column |
| Wrong scenario name | Match exactly with the test method name |
| Extra rows filtered | Remove the [Scenario] column to load all rows |
See Data Formats - Scenario Filtering.
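For reference, a dataset using the default `[Scenario]` marker looks like this (illustrative table and method names):

```csv
ID,NAME,EMAIL,[Scenario]
1,Alice,alice@example.com,shouldFindUser
2,Bob,bob@example.com,shouldDeleteUser
```

Only rows whose `[Scenario]` value matches the current test method name are loaded for that test.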
Cell Value Mismatch
Symptom:

```yaml
- path: "row[0].EMAIL"
  expected: john@example.com
  actual: jane@example.com
```

Diagnosis:
- Compare the expected CSV with the actual database state
- Check whether test logic updated the value
- Verify row ordering matches
Solution:
| Cause | Action |
|---|---|
| Row order differs | Use rowOrdering = RowOrdering.UNORDERED |
| Timestamp precision | Check the comparison strategy for date columns |
| Floating point | Values within epsilon (1e-6) match automatically |
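The floating-point rule in the last row can be illustrated with a simplified sketch. The 1e-6 epsilon comes from the table above; this is not the framework's internal comparator:

```java
public class EpsilonCompare {
    static final double EPSILON = 1e-6;

    // Two doubles are considered equal when their absolute difference
    // is within the epsilon tolerance.
    static boolean nearlyEqual(double expected, double actual) {
        return Math.abs(expected - actual) <= EPSILON;
    }

    public static void main(String[] args) {
        System.out.println(nearlyEqual(1.0000001, 1.0000005)); // within 1e-6
        System.out.println(nearlyEqual(1.0, 1.001));           // outside 1e-6
    }
}
```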
Using excludeColumns and columnStrategies
Precedence Rule: excludeColumns takes priority over columnStrategies.
```java
@ExpectedDataSet(sources = @DataSetSource(
    excludeColumns = {"CREATED_AT"},  // Excluded first
    columnStrategies = {
        @ColumnStrategy(name = "UPDATED_AT", strategy = Strategy.IGNORE)
    }
))
```

In this example, `CREATED_AT` is excluded entirely. `UPDATED_AT` uses the IGNORE strategy for comparison.
See Public API for annotation details.
DataSource Issues
Default DataSource Not Registered
Symptom:

```
No default data source registered
```

Diagnosis:
- Check that the `@BeforeAll` method signature includes `ExtensionContext`
- Verify `registerDefault()` is called
- Confirm no exception occurred during registration
Solution:

JUnit 5 (Java):

```java
@BeforeAll
static void setUp(ExtensionContext context) throws SQLException {
    var dataSource = new JdbcDataSource();
    dataSource.setURL("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1");
    DatabaseTestExtension.getRegistry(context).registerDefault(dataSource);
}
```

Spock (Groovy):

```groovy
def setupSpec() {
    def dataSource = new JdbcDataSource()
    dataSource.setURL("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1")
    dbTesterRegistry.registerDefault(dataSource)
}
```

Kotlin:

```kotlin
override val dbTesterRegistry = DataSourceRegistry()

@BeforeAll
fun setup() {
    val dataSource = JdbcDataSource().apply {
        setURL("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1")
    }
    dbTesterRegistry.registerDefault(dataSource)
}
```

Named DataSource Not Found
Symptom:

```
No data source registered for name: secondary_db
```

Solution: Register the named DataSource:

```java
registry.register("secondary_db", secondaryDataSource);
```

Then reference it in annotations:

```java
@DataSet(sources = @DataSetSource(dataSourceName = "secondary_db"))
```

Performance Optimization
Large Dataset Optimization
Symptom: Tests with many rows run slowly.
Solutions:
| Optimization | Impact | How |
|---|---|---|
| Use `RowOrdering.ORDERED` | Fastest comparison (O(n)) | Set in `@ExpectedDataSet` |
| Use `TRUNCATE_INSERT` | Faster than `CLEAN_INSERT` | Set in `@DataSet` |
| Create `load-order.txt` | Skips metadata discovery | Add file to dataset directory |
| Reduce dataset size | Fewer rows to process | Use [Scenario] filtering |
RowOrdering Performance
RowOrdering.UNORDERED performs O(n*m) comparison in the worst case. Use ORDERED when row order is predictable.
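To see where the O(n*m) cost comes from, here is a simplified sketch of the two matching strategies. This is illustrative only; the framework's actual comparison logic may differ:

```java
import java.util.ArrayList;
import java.util.List;

public class RowMatching {
    // Ordered: rows are compared pairwise in one pass -- O(n).
    static boolean orderedMatch(List<String> expected, List<String> actual) {
        return expected.equals(actual);
    }

    // Unordered: each expected row scans the remaining actual rows for a
    // match -- O(n*m) in the worst case.
    static boolean unorderedMatch(List<String> expected, List<String> actual) {
        List<String> remaining = new ArrayList<>(actual);
        for (String row : expected) {
            if (!remaining.remove(row)) {
                return false; // no matching actual row
            }
        }
        return remaining.isEmpty();
    }

    public static void main(String[] args) {
        List<String> expected = List.of("1,Alice", "2,Bob");
        List<String> actual = List.of("2,Bob", "1,Alice");
        System.out.println(orderedMatch(expected, actual));   // false: order differs
        System.out.println(unorderedMatch(expected, actual)); // true: same rows, any order
    }
}
```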
See Database Operations for operation details.
Connection Pool Configuration
Symptom: Connection timeout or pool exhaustion.
Note: Connection pooling is external to DB Tester. Configure your connection pool (HikariCP, c3p0, and others) accordingly.
Recommendations:
- Set an appropriate `maximumPoolSize` for parallel test execution
- Configure `connectionTimeout` for slow database connections
- Use `DB_CLOSE_DELAY=-1` for H2 in-memory databases
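For example, with HikariCP a minimal pool configuration might look like this (illustrative values; the property names follow HikariCP's documented settings):

```properties
jdbcUrl=jdbc:h2:mem:test;DB_CLOSE_DELAY=-1
maximumPoolSize=8
connectionTimeout=30000
```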
Memory Management
Symptom: OutOfMemoryError with large datasets.
Solutions:
- Split large CSVs into smaller files per scenario
- Use the `[Scenario]` column to load only relevant rows
- Increase JVM heap size for tests: `-Xmx512m`
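With Gradle, for instance, the test JVM heap can be raised in the build script (`maxHeapSize` is a standard Gradle `Test` task property):

```groovy
// build.gradle -- raise the heap for the test JVM
test {
    maxHeapSize = "512m"
}
```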
Common Mistakes
Classpath Placement Error
Mistake: Placing dataset files outside src/test/resources.
Correct Structure:

```
src/test/resources/
└── com/example/UserRepositoryTest/
    ├── USERS.csv
    └── expected/
        └── USERS.csv
```

Scenario Column Name Mismatch
Mistake: Using a scenario marker different from the configured value.
Default: [Scenario] column
Custom Configuration:

```java
Configuration.builder()
    .conventions(ConventionSettings.builder()
        .scenarioMarker("[TestCase]")  // Custom marker
        .build())
    .build();
```

Extension Mismatch
Mistake: Using unsupported file extensions with a concrete DataFormat.
With DataFormat.AUTO (the default), the framework accepts all supported extensions (.csv, .tsv, .json, .yaml). When a concrete format is configured, only files with the matching extension load.
Solution: Use DataFormat.AUTO (default) to load all supported formats, or configure the matching format:
```java
ConventionSettings.builder()
    .dataFormat(DataFormat.TSV)
    .build();
```

Alternatively, rename files to match the configured format extension.
Expectation Suffix Mismatch
Mistake: Expected files not in the expected/ subdirectory.
Default: expected/ suffix for expectation datasets.
Custom Configuration:

```java
ConventionSettings.builder()
    .expectationSuffix("verify/")  // Custom suffix
    .build();
```

See Configuration for all settings.
Table Name Case Sensitivity
Mistake: CSV filename case does not match the table name.
Example:
- Table created as `USERS` (H2 uppercase)
- CSV named `users.csv` (lowercase)
Solution: Match the exact case of your database table name. H2 converts unquoted identifiers to uppercase.
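The folding behavior can be seen directly in SQL (standard H2 default: unquoted identifiers fold to uppercase; quoting preserves case):

```sql
CREATE TABLE users (id INT);    -- stored as USERS; dataset file: USERS.csv
CREATE TABLE "users" (id INT);  -- quoted: stored as users; dataset file: users.csv
```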
Foreign Key Order
Mistake: Inserting child records before parent records.
Solution: Create `load-order.txt` in your dataset directory:

```
PARENT
CHILD
GRANDCHILD
```

Then configure the table ordering strategy:

```java
@DataSet(tableOrdering = TableOrderingStrategy.LOAD_ORDER_FILE)
```

Debugging Workflow
Step 1: Enable DEBUG Logging
```properties
# application.properties or logback.xml
logging.level.io.github.seijikohara.dbtester=DEBUG
```

Step 2: Check Dataset Loading
DEBUG output shows:
- Which files the framework loads
- Table order determination
- Row filtering by scenario
Step 3: Verify Database State
Query the database directly after @DataSet preparation:
```java
@Test
@DataSet
void debugTest() throws SQLException {
    try (var conn = dataSource.getConnection();
         var stmt = conn.createStatement();
         var rs = stmt.executeQuery("SELECT * FROM USERS")) {
        while (rs.next()) {
            System.out.println(rs.getString("NAME"));
        }
    }
}
```

Step 4: Compare Expected vs Actual
If validation fails, the YAML output shows exact differences. Use this output to identify whether the issue is in:
- Expected data (CSV)
- Test logic
- Database state
Related Documentation
- Error Handling -- Exception specifications
- Configuration -- Framework settings
- Data Formats -- Data format structure
- Database Operations -- Operation types
- Public API -- Annotation reference