Using Multiple DataSources In Spring Batch

Here I am going to show you how to use multiple data sources in a Spring Batch application. You may need multiple data sources for various reasons. For example, you may want to store the Spring Batch metadata tables in an H2 database while the business data lives in Oracle, MySQL, or any other database.

Another situation may arise where, in the production environment, you do not have permission to create the Spring Batch metadata tables; in that case you can use an H2 database for storing the metadata.


Prerequisites: Java 19, Spring Boot 3.1.2, Maven 3.8.5, MySQL 8.0.31

Project Setup

You can create a Maven-based project in your favorite IDE or tool and use the following pom.xml file for your project.

In the following build file, notice that I have used two different database dependencies – H2 and MySQL. The H2 in-memory database will be used for storing the Spring Batch metadata tables, and the MySQL database will be used for storing the actual business data.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns=""
		xmlns:xsi=""
		xsi:schemaLocation="">
	<modelVersion>4.0.0</modelVersion>

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>3.1.2</version>
		<relativePath />
	</parent>

	<groupId>com.roytuts</groupId>
	<artifactId>spring-batch-multiple-datasources</artifactId>
	<version>0.0.1-SNAPSHOT</version>

	<properties>
		<java.version>19</java.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-batch</artifactId>
		</dependency>
		<!-- H2 in-memory database for the Spring Batch metadata tables -->
		<dependency>
			<groupId>com.h2database</groupId>
			<artifactId>h2</artifactId>
			<scope>runtime</scope>
		</dependency>
		<!-- MySQL database for the business data -->
		<dependency>
			<groupId>com.mysql</groupId>
			<artifactId>mysql-connector-j</artifactId>
			<scope>runtime</scope>
		</dependency>
	</dependencies>
</project>

MySQL Database Config

I am using only the MySQL database configuration in the src/main/resources/ file. You don't need to configure the H2 database, as it is an in-memory database and will be available as long as the application is live.



The spring.main.allow-bean-definition-overriding=true property lets a later bean definition override an earlier one when you want to override the definition or implementation of a Spring bean.
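A minimal file for the MySQL side might look like the following sketch; the URL, schema name (taken from the roytuts database used later in this example), username, and password are placeholder assumptions that you must adapt to your own environment.

```properties
# MySQL datasource (business data) - placeholder values, adjust for your environment
spring.datasource.url=jdbc:mysql://localhost:3306/roytuts

# allow overriding of bean definitions as explained above
spring.main.allow-bean-definition-overriding=true
```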

MySQL Table

The following MySQL table stores the person details read from the CSV file used in this Spring Batch example.

USE `roytuts`;

CREATE TABLE `persons` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `email` varchar(150) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

DataSource Config

The following configuration class defines the data sources for the H2 and MySQL databases. It also defines the transaction managers for both databases. Whenever you work with multiple data sources or transaction managers, you need to mark one of them as primary.

@Configuration
public class DataSourceConfig {

	@Autowired
	private Environment environment;

	@Bean(name = "h2DataSource")
	public DataSource h2DataSource() {
		// embedded H2 database initialized with the Spring Batch metadata schema
		EmbeddedDatabaseBuilder embeddedDatabaseBuilder = new EmbeddedDatabaseBuilder();
		return embeddedDatabaseBuilder.setType(EmbeddedDatabaseType.H2)
				.addScript("classpath:org/springframework/batch/core/schema-drop-h2.sql")
				.addScript("classpath:org/springframework/batch/core/schema-h2.sql")
				.build();
	}

	@Bean(name = "mySQLDataSource")
	public DataSource mySQLDataSource() {
		// MySQL datasource built from the properties
		return DataSourceBuilder.create().driverClassName(environment.getProperty("spring.datasource.driverClassName"))
				.url(environment.getProperty("spring.datasource.url"))
				.username(environment.getProperty("spring.datasource.username"))
				.password(environment.getProperty("spring.datasource.password"))
				.build();
	}

	@Bean(name = "mySQLDataSourceTransactionManager")
	public PlatformTransactionManager mySQLDataSourceTransactionManager() {
		return new DataSourceTransactionManager(mySQLDataSource());
	}

	@Bean
	public NamedParameterJdbcTemplate namedParameterJdbcTemplate() {
		return new NamedParameterJdbcTemplate(mySQLDataSource());
	}

	@Primary
	@Bean(name = "platformTransactionManager")
	public PlatformTransactionManager platformTransactionManager() {
		return new DataSourceTransactionManager(h2DataSource());
	}
}

FieldSet Mapper

The field set mapper sets the values read from the input source onto the appropriate object.

public class UserFieldSetMapper implements FieldSetMapper<User> {

	@Override
	public User mapFieldSet(FieldSet fieldSet) throws BindException {
		User user = new User();
		// the first CSV column becomes the name, the second the email
		user.setName(fieldSet.readString(0));
		user.setEmail(fieldSet.readString(1));
		return user;
	}
}

Item Processor

The item processor processes each item, transforming it into something else before it is written to the destination.

public class UserItemProcessor implements ItemProcessor<User, User> {

	@Override
	public User process(final User user) throws Exception {
		final String domain = ""; // the original domain value was elided
		final String name = user.getName().toUpperCase();
		final String email = user.getName() + "@" + domain;
		final User transformedUser = new User(name, email);
		System.out.println("Converting [" + user + "] => [" + transformedUser + "]");
		return transformedUser;
	}
}

Prepared Statement

This is required to bind each item's values when writing to the destination – here, the database table.

public class PersonsPreparedStatementSetter implements ItemPreparedStatementSetter<User> {

	@Override
	public void setValues(User item, PreparedStatement ps) throws SQLException {
		ps.setString(1, item.getName());
		ps.setString(2, item.getEmail());
	}
}

Spring Batch Config

The Spring Batch configuration sets up the JobRepository, Job, ItemReader, ItemProcessor, and ItemWriter for the batch processing mechanism.

You don't need @EnableBatchProcessing in Spring Batch 5. The @Configuration annotation is enough to configure the batch infrastructure.

@Configuration
public class SpringBatchConfig {

	@Autowired
	@Qualifier("h2DataSource")
	private DataSource dataSource;

	@Autowired
	private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

	@Autowired
	private PlatformTransactionManager platformTransactionManager;

	private static final String QUERY_INSERT_PERSONS = "INSERT " + "INTO persons(name, email) " + "VALUES (?, ?)";

	@Bean
	public JobRepository jobRepository() throws Exception {
		JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
		// the batch metadata tables live in the H2 datasource
		factory.setDataSource(dataSource);
		factory.setTransactionManager(platformTransactionManager);
		factory.afterPropertiesSet();
		return factory.getObject();
	}

	// creates an item reader
	@Bean
	public ItemReader<User> reader() {
		FlatFileItemReader<User> reader = new FlatFileItemReader<User>();
		// look for the file person.csv on the classpath
		reader.setResource(new ClassPathResource("person.csv"));
		// line mapper
		DefaultLineMapper<User> lineMapper = new DefaultLineMapper<User>();
		// each line is comma separated
		lineMapper.setLineTokenizer(new DelimitedLineTokenizer());
		// map the file's fields onto the object
		lineMapper.setFieldSetMapper(new UserFieldSetMapper());
		reader.setLineMapper(lineMapper);
		return reader;
	}

	// creates an instance of our UserItemProcessor for transformation
	@Bean
	public ItemProcessor<User, User> processor() {
		return new UserItemProcessor();
	}

	// creates the item writer
	@Bean
	@Transactional(rollbackFor = Exception.class)
	public ItemWriter<User> writer() {
		JdbcBatchItemWriter<User> batchItemWriter = new JdbcBatchItemWriter<>();
		// the business data goes to the MySQL datasource
		batchItemWriter.setDataSource(namedParameterJdbcTemplate.getJdbcTemplate().getDataSource());
		batchItemWriter.setSql(QUERY_INSERT_PERSONS);
		ItemPreparedStatementSetter<User> valueSetter = new PersonsPreparedStatementSetter();
		batchItemWriter.setItemPreparedStatementSetter(valueSetter);
		return batchItemWriter;
	}

	@Bean
	public Job importUserJob(Step step) throws Exception {
		// need incrementer to maintain execution state
		return new JobBuilder("importUserJob", jobRepository()).incrementer(new RunIdIncrementer()).flow(step).end()
				.build();
	}

	@Bean
	public Step step1(ItemReader<User> reader, ItemWriter<User> writer, ItemProcessor<User, User> processor)
			throws Exception {
		// the chunk size controls how much data is written at a time -
		// in this case, up to five records per transaction.
		// Next, we configure the reader, processor, and writer.
		return new StepBuilder("step1", jobRepository()).<User, User>chunk(5, platformTransactionManager)
				.reader(reader).processor(processor).writer(writer).build();
	}
}

Input File

The input file (person.csv) is a CSV (Comma Separated Values) file that simply holds first-name and last-name pairs. The file is kept under the classpath folder src/main/resources:

soumitra,roy
souvik,sanyal
arup,chatterjee
suman,mukherjee
debina,guha
liton,sarkar
debabrata,poddar
VO Class

The Value Object class is a simple class with two attributes: name and email.

public class User {

	private String name;
	private String email;

	public User() {
	}

	public User(String name, String email) { = name; = email;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) { = name;
	}

	public String getEmail() {
		return email;
	}

	public void setEmail(String email) { = email;
	}

	@Override
	public String toString() {
		return "name: " + name + ", email:" + email;
	}
}

Spring Boot Main Class

A class with a main method and the @SpringBootApplication annotation is enough to start the Spring Boot application.

@SpringBootApplication
public class SpringBatch {

	public static void main(String[] args) {, args);
	}
}


Testing Multiple Data Sources In Spring Batch

Here is the output of the Spring Batch application when it is run by executing the main class.

Converting [name: soumitra, email:roy] => [name: SOUMITRA,]
Converting [name: souvik, email:sanyal] => [name: SOUVIK,]
Converting [name: arup, email:chatterjee] => [name: ARUP,]
Converting [name: suman, email:mukherjee] => [name: SUMAN,]
Converting [name: debina, email:guha] => [name: DEBINA,]
Converting [name: liton, email:sarkar] => [name: LITON,]
Converting [name: debabrata, email:poddar] => [name: DEBABRATA,]

Data in the MySQL database table gets inserted as:


The INSERT statements, when exported, look like the following:

INSERT INTO `persons` (`id`, `name`, `email`) VALUES
	(1, 'SOUMITRA', ''),
	(2, 'SOUVIK', ''),
	(3, 'ARUP', ''),
	(4, 'SUMAN', ''),
	(5, 'DEBINA', ''),
	(6, 'LITON', ''),
	(7, 'DEBABRATA', '');

I hope you got an idea of how to use multiple data sources in a Spring Batch application.

Source Code

