During my attempts to get SFTP working with Testcontainers, I ran into a permission issue I couldn't solve. I thought I might get around it by using Testcontainers together with Docker Compose, and along the way I came up with a streaming-based file copy/edit solution I'd like to share.
My integration tests create a temp dir with JUnit 5's @TempDir annotation, and I wanted to copy a docker-compose template from the Java project's src/test/resources folder to a destination directory and modify it on the fly by replacing some parameters.
This is how I did it.
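For context, the test class around the copy logic looks roughly like this. This is only a sketch: the field sftpHomeDirectory and the constants USER, PASSWORD and PORT are the names used in the snippet further down, while the class name, test method and concrete values are just illustrative.

import java.io.File;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

class SftpIntegrationTest {

    // Illustrative values; the names match the copy snippet further down.
    private static final String USER = "foo";
    private static final String PASSWORD = "secret";
    private static final int PORT = 2222; // host port mapped onto the container's port 22

    @TempDir // JUnit 5 injects a fresh temporary directory per test
    File sftpHomeDirectory;

    @Test
    void uploadsFileViaSftp() throws Exception {
        // 1. render docker-compose.yml from the template (shown below)
        // 2. start the compose environment and run the actual SFTP assertions
    }
}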
First of all, our docker-compose.yml template contains a few parameters. I've decided to go with this notation:
version: '3.1'
services:
  sftp:
    image: atmoz/sftp
    volumes:
      - <host-dir>:/home/<user>/upload
    ports:
      - "<port>:22"
    command: <user>:<password>:1777
We are using Files.lines to read the resource file as a stream. This way the whole file is never loaded into memory at once; instead we process it line by line by operating on the stream.
I'm using the .map operation to replace the placeholders, wherever they occur in the current line, with the real data, such as the temp dir path.
// Both the lazy line stream and the writer are closed by try-with-resources.
try (Stream<String> lines = Files.lines(Path.of("src/test/resources/docker-compose-template.yml"), StandardCharsets.UTF_8);
     PrintWriter pw = new PrintWriter(Paths.get("docker-compose.yml").toFile(), StandardCharsets.UTF_8)) {
    lines.map(line -> line.replace("<host-dir>", sftpHomeDirectory.getAbsolutePath())
                          .replace("<user>", USER)
                          .replace("<password>", PASSWORD)
                          .replace("<port>", String.valueOf(PORT)))
         .forEachOrdered(pw::println); // write each processed line in its original order
}
(A note for Windows users: my first version used replaceAll here, and it swallowed all the backslashes of the inserted path. replaceAll treats its second argument as a regex replacement string, in which a backslash escapes the following character, so Windows path separators simply disappear. replace does a plain literal substring replacement and doesn't have this problem; alternatively, Matcher.quoteReplacement can be used to escape the replacement string.)
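A quick illustration of the difference, with a made-up Windows temp path:

String winPath = "C:\\Users\\me\\AppData\\Local\\Temp\\junit123";

// replaceAll interprets the replacement string: a backslash escapes the following
// character, so the result is "C:UsersmeAppDataLocalTempjunit123:/home/foo/upload"
String broken = "<host-dir>:/home/foo/upload".replaceAll("<host-dir>", winPath);

// replace substitutes literal text and keeps the backslashes:
// "C:\Users\me\AppData\Local\Temp\junit123:/home/foo/upload"
String kept = "<host-dir>:/home/foo/upload".replace("<host-dir>", winPath);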
Then I'm using a PrintWriter, which lets us write to the target file line by line, and I iterate over the stream with forEachOrdered so that each processed line is print-written directly to the file in its original order.
This also works for files stored anywhere on the file system, not just in the resources folder; the one restriction is that you cannot read from and write to the same file. And file size should not matter, because we process line by line and never load the whole file into memory.
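To close the loop with Testcontainers: once the rendered docker-compose.yml exists, it can be handed to a DockerComposeContainer. The snippet below is only a sketch of that wiring, not part of my original setup; the service name sftp_1 (docker-compose's default container name for the sftp service) and the wait strategy are assumptions.

// Requires org.testcontainers.containers.DockerComposeContainer and
// org.testcontainers.containers.wait.strategy.Wait from the Testcontainers library.
DockerComposeContainer<?> environment =
        new DockerComposeContainer<>(new File("docker-compose.yml"))
                .withExposedService("sftp_1", 22, Wait.forListeningPort());

environment.start();
try {
    // run the SFTP assertions against
    // environment.getServiceHost("sftp_1", 22) and environment.getServicePort("sftp_1", 22)
} finally {
    environment.stop();
}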