Dockerized DB's ports work, but aren't mapped to localhost

The test’s Docker config:

docker:
  - image: cimg/go:1.19.7
  - image: yugabytedb/yugabyte:latest
    command: bin/yugabyted start --daemon=false
    name: yugabyte

During the test:
This works: dockerize -wait tcp://yugabyte:5433 -timeout 3m
This hangs: dockerize -wait tcp://127.0.0.1:5433 -timeout 3m
It fails whether or not “name: yugabyte” is specified above.

I’d like the tests to run against localhost (127.0.0.1) for convenience in running them on developer machines. What am I missing?

Other info:
Yugabyte also opens about a dozen other ports, but I only need 5433

There are limitations when running docker images this way, because they run in a multi-tenant environment rather than on a dedicated VM.

While I cannot find any docs that detail the exact configuration used, it is likely that all the images you start up are attached to a named network, which may show up if you run the following command:

docker network ls

This means each docker container, and your host environment, gets its own IP address on a dedicated internal/private network. So tcp://127.0.0.1 refers to the host environment where you are running the dockerize command, while tcp://yugabyte resolves to the different IP address assigned to the yugabyte container.
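
If you want to confirm this, you can inspect that network and see which address each container was given (the network name below is just a placeholder for whatever docker network ls reports):

docker network inspect <network-name>

The Containers section of the output lists each container along with its IPv4 address on that network.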

The only way around this would be to switch from a docker:-based config to a machine:-based config, which would allow you to run the yugabyte container on the ‘host’ network.
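
For example, here is a minimal sketch of a machine:-based job. The machine image tag and step names are assumptions (not taken from your config), and it assumes dockerize is available on the machine image; the yugabyte image and start command are the ones from your original config. With --network host the container shares the VM’s network namespace, so port 5433 is reachable on 127.0.0.1 without any port mappings, matching how developers would run it locally:

machine:
  image: ubuntu-2204:current   # assumed machine image tag
steps:
  - checkout
  - run:
      name: Start YugabyteDB on the host network
      command: |
        docker run -d --name yugabyte --network host \
          yugabytedb/yugabyte:latest bin/yugabyted start --daemon=false
  - run:
      name: Wait for the DB port on localhost
      command: dockerize -wait tcp://127.0.0.1:5433 -timeout 3m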