[Spring] What is Testcontainers?
ㅁ Introduction
While talking with an acquaintance, I asked about the difficulties of TDD. The hardest test cases to write are the ones around INSERT statements. There is a way to exercise the functionality in memory with H2, but it has limits. Testcontainers goes beyond those limits and makes it possible to test in an environment identical to production. Better yet, once you express the setup in code, it builds the environment for you. I can't sleep at all, so I'm going to quickly put together just the test environment. The current time is 00:30.
ㅁ The difficulty of maintaining an integration test environment
In [DevOps] Organizing the Node, Redis, and RDS performance upgrades in a Kube environment, I documented the process of scaling up an AWS environment to build a staging test environment. Because keeping an integration test environment running is expensive, performance has to be scaled down and then provisioned again ahead of time whenever an integration test needs to run.
ㅁ Limitations of in-memory services
An in-memory service may not offer every feature of the production service. For example, your application may rely on advanced features of a PostgreSQL or Oracle database, and H2 may not support all of them well enough to be used for integration tests.
In-memory services also delay the feedback cycle. Tests exist to serve development, but they can degenerate into tests for the sake of tests. For example, you might write a SQL query and verify it against an H2 in-memory database, only to deploy the application and discover that the syntax works on H2 but not on the production PostgreSQL or Oracle database. Should you then rewrite the code some other way just to work around the mismatch? Agonizing over that kind of workaround delays finishing the feature and, worse, loses sight of why we test at all.
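As a purely hypothetical illustration (the `events` table and `payload` column are made up), consider a query that uses PostgreSQL's JSONB `->>` operator. A real PostgreSQL instance accepts it, but H2 has no JSONB operators, so an H2-backed test can never exercise this query path:

```java
public class PostgresOnlyQuery {
    // Hypothetical query: the JSONB ->> operator is PostgreSQL-specific,
    // so it runs against real Postgres but fails to parse on H2.
    public static final String FIND_BY_TYPE = """
            SELECT id, payload
            FROM events
            WHERE payload ->> 'type' = ?
            """;
}
```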
ㅁ What is Testcontainers?
Testcontainers is a library that provides an easy and lightweight API for bootstrapping local development and test dependencies with real services wrapped in Docker containers. Using Testcontainers, you can write tests that depend on the same services you use in production, with no mocks or in-memory substitutes. Once the test code is written, it creates the containerized test environment on its own and cleans it up afterwards.
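As a minimal sketch of that idea (assuming the `org.testcontainers:junit-jupiter` and `org.testcontainers:postgresql` test dependencies and a local Docker daemon; the table and test names are mine), a JUnit 5 test can spin up a real PostgreSQL container, run the kind of INSERT case that is awkward on H2, and let Testcontainers remove the container afterwards:

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import static org.junit.jupiter.api.Assertions.assertEquals;

@Testcontainers
class PostgresInsertTest {

    // A real PostgreSQL container, started once for this class and
    // cleaned up automatically when the tests finish.
    @Container
    static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>(DockerImageName.parse("postgres:15-alpine"));

    @Test
    void insertIsVisibleToQueries() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT)");
            stmt.execute("INSERT INTO users (name) VALUES ('peter')");
            try (ResultSet rs = stmt.executeQuery("SELECT count(*) FROM users")) {
                rs.next();
                assertEquals(1, rs.getInt(1));
            }
        }
    }
}
```

The test talks to a genuine PostgreSQL engine rather than an H2 approximation, so query syntax that passes here will behave the same way in production.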
ㅁ Testcontainers for Java
# Clone the source
$ git clone https://github.com/testcontainers/testcontainers-java.git
ㅇ It is 3:36 AM on Jan 23. I tried several things to resolve the error, but failed to get the example running.
ㅇ Jan 23, 9:22 AM: resolved after upgrading IntelliJ.
ㅇ The list of sample sources is as follows.
ㅁ Testcontainers examples
ㅇ Examples of the various use cases Testcontainers supports can be found below.
- Hazelcast
- Kafka Cluster with multiple brokers
- Linked containers
- Neo4j
- Redis
- Selenium
- Selenium Module with Cucumber
- Singleton Container Pattern
- Solr
- Spring Boot
- Spring Boot with Kotlin
- TestNG
- ImmuDb
- Zookeeper
- NATS
- SFTP
ㅁ testKafkaContainerCluster
ㅇ I ran the Kafka container cluster test; a simplified sketch of what such a test does is shown next.
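The log below shows Testcontainers pulling one cp-zookeeper image and starting three cp-kafka brokers, which is what the repository's kafka-cluster example wires together. As a simplified single-broker sketch of the same idea (not the repository's actual test; it assumes the `org.testcontainers:kafka` module and `kafka-clients` are on the classpath):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

import java.util.Map;

public class SingleBrokerSketch {
    public static void main(String[] args) throws Exception {
        // Start a single-broker Kafka in Docker; close() removes it again.
        try (KafkaContainer kafka =
                     new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))) {
            kafka.start();

            Map<String, Object> props = Map.of(
                    "bootstrap.servers", kafka.getBootstrapServers(),
                    "key.serializer", StringSerializer.class.getName(),
                    "value.serializer", StringSerializer.class.getName());

            // Produce one record against the real broker, much as the
            // cluster test below produces to the "messages" topic.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("messages", "key", "hello")).get();
            }
        }
    }
}
```

The Ryuk container that appears at the top of the log is Testcontainers' reaper; as the log itself notes, it monitors and terminates these containers on JVM exit.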
ㅇ Below is my first test log.
> Task :testcontainers-java:buildSrc:compileJava NO-SOURCE
> Task :testcontainers-java:buildSrc:compileGroovy UP-TO-DATE
> Task :testcontainers-java:buildSrc:pluginDescriptors UP-TO-DATE
> Task :testcontainers-java:buildSrc:processResources NO-SOURCE
> Task :testcontainers-java:buildSrc:classes UP-TO-DATE
> Task :testcontainers-java:buildSrc:jar UP-TO-DATE
> Task :kafka-cluster:compileJava NO-SOURCE
> Task :kafka-cluster:processResources NO-SOURCE
> Task :kafka-cluster:classes UP-TO-DATE
> Task :kafka-cluster:processTestResources UP-TO-DATE
> Task :testcontainers-java:testcontainers:compileJava UP-TO-DATE
> Task :testcontainers-java:kafka:compileJava UP-TO-DATE
> Task :testcontainers-java:testcontainers:processResources UP-TO-DATE
> Task :testcontainers-java:testcontainers:classes UP-TO-DATE
> Task :testcontainers-java:testcontainers:jar UP-TO-DATE
> Task :kafka-cluster:compileTestJava UP-TO-DATE
> Task :kafka-cluster:testClasses UP-TO-DATE
> Task :testcontainers-java:kafka:processResources NO-SOURCE
> Task :testcontainers-java:kafka:classes UP-TO-DATE
> Task :testcontainers-java:kafka:jar UP-TO-DATE
10:04:24.838 INFO org.testcontainers.images.PullPolicy - Image pull policy will be performed by: DefaultPullPolicy()
10:04:24.853 INFO org.testcontainers.utility.ImageNameSubstitutor - Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor')
10:04:25.781 INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with local Unix socket (unix:///var/run/docker.sock)
10:04:25.789 INFO org.testcontainers.DockerClientFactory - Docker host IP address is localhost
10:04:25.807 INFO org.testcontainers.DockerClientFactory - Connected to docker:
Server Version: 24.0.7
API Version: 1.43
Operating System: Docker Desktop
Total Memory: 7848 MB
10:04:25.846 INFO tc.testcontainers/ryuk:0.6.0 - Pulling docker image: testcontainers/ryuk:0.6.0. Please be patient; this may take some time but only needs to be done once.
10:04:28.939 INFO tc.testcontainers/ryuk:0.6.0 - Starting to pull image
10:04:28.955 INFO tc.testcontainers/ryuk:0.6.0 - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes)
10:04:30.932 INFO tc.testcontainers/ryuk:0.6.0 - Pulling image layers: 2 pending, 1 downloaded, 0 extracted, (2 MB/? MB)
10:04:30.935 INFO tc.testcontainers/ryuk:0.6.0 - Pulling image layers: 1 pending, 2 downloaded, 0 extracted, (2 MB/? MB)
10:04:31.073 INFO tc.testcontainers/ryuk:0.6.0 - Pulling image layers: 1 pending, 2 downloaded, 1 extracted, (2 MB/? MB)
10:04:31.156 INFO tc.testcontainers/ryuk:0.6.0 - Pulling image layers: 1 pending, 2 downloaded, 2 extracted, (3 MB/? MB)
10:04:31.168 INFO tc.testcontainers/ryuk:0.6.0 - Pull complete. 3 layers, pulled in 2s (downloaded 3 MB at 1 MB/s)
10:04:31.168 INFO tc.testcontainers/ryuk:0.6.0 - Image testcontainers/ryuk:0.6.0 pull took PT5.321635S
10:04:31.185 INFO tc.testcontainers/ryuk:0.6.0 - Creating container for image: testcontainers/ryuk:0.6.0
10:04:31.278 INFO tc.testcontainers/ryuk:0.6.0 - Container testcontainers/ryuk:0.6.0 is starting: d16ca5528b6506ffd2a7e1611e16730e50365e9d18e7871e7005f930cb9fdf32
10:04:31.571 INFO tc.testcontainers/ryuk:0.6.0 - Container testcontainers/ryuk:0.6.0 started in PT0.385687S
10:04:31.583 INFO org.testcontainers.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
10:04:31.583 INFO org.testcontainers.DockerClientFactory - Checking the system...
10:04:31.583 INFO org.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0
10:04:31.588 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling docker image: confluentinc/cp-zookeeper:6.2.1. Please be patient; this may take some time but only needs to be done once.
10:04:34.294 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Starting to pull image
10:04:34.294 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes)
10:04:35.025 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 10 pending, 1 downloaded, 0 extracted, (520 KB/? MB)
10:04:36.637 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 9 pending, 2 downloaded, 0 extracted, (26 MB/? MB)
10:04:37.405 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 8 pending, 3 downloaded, 0 extracted, (40 MB/? MB)
10:04:40.199 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 7 pending, 4 downloaded, 0 extracted, (91 MB/? MB)
10:04:40.503 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 6 pending, 5 downloaded, 0 extracted, (95 MB/? MB)
10:04:40.970 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 5 pending, 6 downloaded, 0 extracted, (96 MB/? MB)
10:04:41.220 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 4 pending, 7 downloaded, 0 extracted, (97 MB/? MB)
10:04:41.698 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 3 pending, 8 downloaded, 0 extracted, (99 MB/? MB)
10:04:42.543 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 3 pending, 8 downloaded, 1 extracted, (112 MB/? MB)
10:04:42.568 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 3 pending, 8 downloaded, 2 extracted, (112 MB/? MB)
10:04:42.756 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 2 pending, 9 downloaded, 2 extracted, (117 MB/? MB)
10:04:48.517 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 1 pending, 10 downloaded, 2 extracted, (220 MB/? MB)
10:04:58.204 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 2 extracted, (368 MB/370 MB)
10:05:08.128 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 3 extracted, (368 MB/370 MB)
10:05:08.249 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 4 extracted, (368 MB/370 MB)
10:05:08.282 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 5 extracted, (368 MB/370 MB)
10:05:08.675 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 6 extracted, (370 MB/370 MB)
10:05:08.693 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 7 extracted, (370 MB/370 MB)
10:05:08.707 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 8 extracted, (370 MB/370 MB)
10:05:08.721 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 9 extracted, (370 MB/370 MB)
10:05:09.780 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 10 extracted, (370 MB/370 MB)
10:05:09.796 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pulling image layers: 0 pending, 11 downloaded, 11 extracted, (370 MB/370 MB)
10:05:09.809 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Image confluentinc/cp-zookeeper:6.2.1 pull took PT38.220581S
10:05:09.809 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Pull complete. 11 layers, pulled in 35s (downloaded 370 MB at 10 MB/s)
10:05:09.816 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Creating container for image: confluentinc/cp-zookeeper:6.2.1
10:05:10.534 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Container confluentinc/cp-zookeeper:6.2.1 is starting: ef7e44db79490c16d3cdbf707ba0de5fff527df517d8f7aace55f50f43755144
10:05:10.725 INFO tc.confluentinc/cp-zookeeper:6.2.1 - Container confluentinc/cp-zookeeper:6.2.1 started in PT0.909266S
10:05:10.730 INFO tc.confluentinc/cp-kafka:6.2.1 - Pulling docker image: confluentinc/cp-kafka:6.2.1. Please be patient; this may take some time but only needs to be done once.
10:05:13.439 INFO tc.confluentinc/cp-kafka:6.2.1 - Starting to pull image
10:05:13.440 INFO tc.confluentinc/cp-kafka:6.2.1 - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes)
10:05:14.157 INFO tc.confluentinc/cp-kafka:6.2.1 - Pulling image layers: 10 pending, 1 downloaded, 0 extracted, (759 bytes/? MB)
10:05:19.604 INFO tc.confluentinc/cp-kafka:6.2.1 - Pulling image layers: 9 pending, 2 downloaded, 0 extracted, (69 MB/? MB)
10:05:20.565 INFO tc.confluentinc/cp-kafka:6.2.1 - Pulling image layers: 9 pending, 2 downloaded, 1 extracted, (69 MB/? MB)
10:05:20.581 INFO tc.confluentinc/cp-kafka:6.2.1 - Pulling image layers: 9 pending, 2 downloaded, 2 extracted, (69 MB/? MB)
10:05:20.591 INFO tc.confluentinc/cp-kafka:6.2.1 - Image confluentinc/cp-kafka:6.2.1 pull took PT9.861338S
10:05:20.591 INFO tc.confluentinc/cp-kafka:6.2.1 - Pull complete. 11 layers, pulled in 7s (downloaded 69 MB at 9 MB/s)
10:05:20.596 INFO tc.confluentinc/cp-kafka:6.2.1 - Creating container for image: confluentinc/cp-kafka:6.2.1
10:05:20.704 INFO tc.confluentinc/cp-kafka:6.2.1 - Container confluentinc/cp-kafka:6.2.1 is starting: d50840728ba4e73c17d96b8e4e0af7bca20e31e5b06c15ff4970f9479b407624
10:05:25.752 INFO tc.confluentinc/cp-kafka:6.2.1 - Container confluentinc/cp-kafka:6.2.1 started in PT5.155696S
10:05:25.752 INFO tc.confluentinc/cp-kafka:6.2.1 - Creating container for image: confluentinc/cp-kafka:6.2.1
10:05:25.808 INFO tc.confluentinc/cp-kafka:6.2.1 - Container confluentinc/cp-kafka:6.2.1 is starting: aeff47d60ac199c5b55795c9887cc5757adfd710ab0528490aba53a2bd7b9a89
10:05:30.909 INFO tc.confluentinc/cp-kafka:6.2.1 - Container confluentinc/cp-kafka:6.2.1 started in PT5.156559S
10:05:30.910 INFO tc.confluentinc/cp-kafka:6.2.1 - Creating container for image: confluentinc/cp-kafka:6.2.1
10:05:30.953 INFO tc.confluentinc/cp-kafka:6.2.1 - Container confluentinc/cp-kafka:6.2.1 is starting: 4bf5925c515e24ee1dfde8112696ad65a0b1ba8c0313769d3652faa90557b0f4
10:05:36.668 INFO tc.confluentinc/cp-kafka:6.2.1 - Container confluentinc/cp-kafka:6.2.1 started in PT5.758046S
10:05:38.009 INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
auto.include.jmx.reporter = true
bootstrap.servers = [PLAINTEXT://localhost:54854, PLAINTEXT://localhost:54855, PLAINTEXT://localhost:54856]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
10:05:38.155 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.6.1
10:05:38.156 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 5e3c2b738d253ff5
10:05:38.156 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1705971938154
10:05:38.174 INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = -1
auto.include.jmx.reporter = true
batch.size = 16384
bootstrap.servers = [PLAINTEXT://localhost:54854, PLAINTEXT://localhost:54855, PLAINTEXT://localhost:54856]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = c12bb71b-9959-46d1-b1bf-72b814f90e02
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.adaptive.partitioning.enable = true
partitioner.availability.timeout.ms = 0
partitioner.class = null
partitioner.ignore.keys = false
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
10:05:38.192 INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=c12bb71b-9959-46d1-b1bf-72b814f90e02] Instantiated an idempotent producer.
10:05:38.213 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.6.1
10:05:38.213 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 5e3c2b738d253ff5
10:05:38.213 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1705971938213
10:05:38.228 INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.include.jmx.reporter = true
auto.offset.reset = earliest
bootstrap.servers = [PLAINTEXT://localhost:54854, PLAINTEXT://localhost:54855, PLAINTEXT://localhost:54856]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = tc-ac19dd65-c842-4549-8dba-946ea20b0876
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 45000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
10:05:38.293 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.6.1
10:05:38.293 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 5e3c2b738d253ff5
10:05:38.293 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1705971938293
10:05:38.633 INFO org.apache.kafka.clients.Metadata - [Producer clientId=c12bb71b-9959-46d1-b1bf-72b814f90e02] Cluster ID: SfodUKuHSgeoQu3RR3Wo4A
10:05:38.634 INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=c12bb71b-9959-46d1-b1bf-72b814f90e02] ProducerId set to 2000 with epoch 0
10:05:38.988 INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Subscribed to topic(s): messages
10:05:39.215 INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Cluster ID: SfodUKuHSgeoQu3RR3Wo4A
10:05:39.335 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Discovered group coordinator localhost:54856 (id: 2147483645 rack: null)
10:05:39.342 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] (Re-)joining group
10:05:39.365 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Request joining group due to: need to re-join with the given member-id: consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1-50ba7bb6-2e98-4be0-b3ef-d80624a5a90b
10:05:39.365 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
10:05:39.365 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] (Re-)joining group
10:05:39.388 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Successfully joined group with generation Generation{generationId=1, memberId='consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1-50ba7bb6-2e98-4be0-b3ef-d80624a5a90b', protocol='range'}
10:05:39.396 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Finished assignment for group at generation 1: {consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1-50ba7bb6-2e98-4be0-b3ef-d80624a5a90b=Assignment(partitions=[messages-0, messages-1, messages-2])}
10:05:39.456 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Successfully synced group in generation Generation{generationId=1, memberId='consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1-50ba7bb6-2e98-4be0-b3ef-d80624a5a90b', protocol='range'}
10:05:39.456 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Notifying assignor about the new Assignment(partitions=[messages-0, messages-1, messages-2])
10:05:39.459 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Adding newly assigned partitions: messages-0, messages-1, messages-2
10:05:39.472 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Found no committed offset for partition messages-0
10:05:39.473 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Found no committed offset for partition messages-1
10:05:39.473 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Found no committed offset for partition messages-2
10:05:39.494 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Resetting offset for partition messages-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:54855 (id: 1 rack: null)], epoch=0}}.
10:05:39.500 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Resetting offset for partition messages-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:54854 (id: 0 rack: null)], epoch=0}}.
10:05:39.500 INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Resetting offset for partition messages-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:54856 (id: 2 rack: null)], epoch=0}}.
10:05:39.524 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Revoke previously assigned partitions messages-0, messages-1, messages-2
10:05:39.525 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Member consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1-50ba7bb6-2e98-4be0-b3ef-d80624a5a90b sending LeaveGroup request to coordinator localhost:54856 (id: 2147483645 rack: null) due to the consumer unsubscribed from all topics
10:05:39.526 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Resetting generation and member id due to: consumer pro-actively leaving the group
10:05:39.526 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Request joining group due to: consumer pro-actively leaving the group
10:05:39.526 INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Unsubscribed all topics or patterns and assigned partitions
10:05:39.528 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Resetting generation and member id due to: consumer pro-actively leaving the group
10:05:39.528 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Request joining group due to: consumer pro-actively leaving the group
10:05:40.012 INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Node 0 sent an invalid full fetch response with extraPartitions=(messages-1), response=(messages-1)
10:05:40.027 INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1, groupId=tc-ac19dd65-c842-4549-8dba-946ea20b0876] Node 1 sent an invalid full fetch response with extraPartitions=(messages-0), response=(messages-0)
10:05:40.029 INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed
10:05:40.029 INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter
10:05:40.029 INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed
10:05:40.035 INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.consumer for consumer-tc-ac19dd65-c842-4549-8dba-946ea20b0876-1 unregistered
10:05:40.035 INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=c12bb71b-9959-46d1-b1bf-72b814f90e02] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
10:05:40.039 INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed
10:05:40.039 INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter
10:05:40.040 INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed
10:05:40.040 INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.producer for c12bb71b-9959-46d1-b1bf-72b814f90e02 unregistered
10:05:40.040 INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for adminclient-1 unregistered
10:05:40.042 INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed
10:05:40.043 INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter
10:05:40.043 INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed
> Task :kafka-cluster:test
Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
For more on this, please refer to https://docs.gradle.org/8.5/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation.
BUILD SUCCESSFUL in 1m 19s
11 actionable tasks: 1 executed, 10 up-to-date
10:05:40 AM: Execution finished ':kafka-cluster:test --tests "com.example.kafkacluster.KafkaContainerClusterTest.testKafkaContainerCluster"'.