🌅 Saturday Setup: Database Development Environment - Optimizing Your Data Stack

On Saturdays we optimize our workspaces. Today the focus is the database setup for development: how to configure a data environment that is robust, scalable, and speeds up your daily workflow without compromising performance.

:file_cabinet: The Data Problem in Development

Most developers treat databases as an afterthought, relying on basic local installations that become slow, inconsistent, and hard to maintain. A smart data setup can transform your development and debugging speed.

Symptoms of a Suboptimal Setup:

  • Queries that are slow in development but fast in production
  • Inconsistent data across developers on the team
  • Complex setup for new team members
  • Manual, error-prone backup and restore
  • Hard to test against realistic data

:rocket: A Modern Database Development Stack

Core Infrastructure

┌─────────────────────────────────────────┐
│           DEVELOPMENT STACK             │
├─────────────────────────────────────────┤
│  Docker Compose                         │
│  ├── PostgreSQL (Primary)               │
│  ├── Redis (Cache/Sessions)             │  
│  ├── Elasticsearch (Search)             │
│  └── MongoDB (Document Store)           │
├─────────────────────────────────────────┤
│  Management Tools                       │
│  ├── pgAdmin (PostgreSQL GUI)           │
│  ├── Redis Commander (Redis GUI)        │
│  └── Elasticsearch Head (ES GUI)        │
├─────────────────────────────────────────┤
│  Development Utilities                  │
│  ├── Database Seeder                    │
│  ├── Migration Runner                   │
│  ├── Query Profiler                     │
│  └── Backup/Restore Scripts             │
└─────────────────────────────────────────┘

:spouting_whale: Smart Docker Compose Configuration

Multi-Database Base Setup:

# docker-compose.db.yml
version: '3.8'

services:
  # PostgreSQL - primary database
  postgres:
    image: postgres:15-alpine
    container_name: dev_postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-devdb}
      POSTGRES_USER: ${POSTGRES_USER:-devuser}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-devpass}
      # Data checksums for early corruption detection
      # (wal_buffers is a server parameter, not an initdb flag; it is set below)
      POSTGRES_INITDB_ARGS: "--data-checksums"
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d
      - ./db/backups:/backups
      # Development-tuned configuration, loaded explicitly via config_file
      - ./db/postgresql.conf:/etc/postgresql/postgresql.conf
    command: >
      postgres
      -c config_file=/etc/postgresql/postgresql.conf
      -c listen_addresses='*'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-devuser}"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis - cache and sessions
  redis:
    image: redis:7-alpine
    container_name: dev_redis
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
      - ./db/redis.conf:/etc/redis/redis.conf
    command: redis-server /etc/redis/redis.conf
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3

  # MongoDB - Document store
  mongodb:
    image: mongo:7
    container_name: dev_mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER:-mongouser}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD:-mongopass}
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
      - ./db/mongo-init:/docker-entrypoint-initdb.d
    command: mongod --auth --bind_ip_all

  # Elasticsearch - search engine
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: dev_elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health"]
      interval: 30s
      timeout: 10s
      retries: 5

  # pgAdmin - PostgreSQL management
  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: dev_pgadmin
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@dev.local
      PGADMIN_DEFAULT_PASSWORD: admin
      PGADMIN_CONFIG_SERVER_MODE: 'False'
      PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED: 'False'
    ports:
      - "5050:80"
    volumes:
      - pgadmin_data:/var/lib/pgadmin
      - ./db/pgadmin-servers.json:/pgadmin4/servers.json
    depends_on:
      - postgres

  # Redis Commander - Redis GUI
  redis-commander:
    image: rediscommander/redis-commander:latest
    container_name: dev_redis_commander
    environment:
      REDIS_HOSTS: local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local
  mongodb_data:
    driver: local
  elasticsearch_data:
    driver: local
  pgadmin_data:
    driver: local

:high_voltage: Optimized Configuration per Database

PostgreSQL Development Config:

# db/postgresql.conf - optimized for development
# Memory Settings
shared_buffers = 256MB                # 25% of RAM for development
effective_cache_size = 1GB            # Estimate of OS cache
maintenance_work_mem = 64MB           # For VACUUM, CREATE INDEX
work_mem = 4MB                        # Per-operation sort memory

# WAL Settings
wal_buffers = 16MB                    # WAL buffer size
checkpoint_completion_target = 0.9    # Smooth checkpoints
max_wal_size = 1GB                    # Maximum WAL size
min_wal_size = 80MB                   # Minimum WAL size

# Query Planner
random_page_cost = 1.1                # SSD-optimized
effective_io_concurrency = 200        # Concurrent I/O operations
default_statistics_target = 100       # Statistics detail level

# Development Features
log_statement = 'all'                 # Log all statements
log_duration = on                     # Log query duration
log_min_duration_statement = 100      # Log slow queries (100ms+)
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '

# Extensions
shared_preload_libraries = 'pg_stat_statements,auto_explain'

Redis Development Config:

# db/redis.conf - tuned for development
# Memory Management
maxmemory 256mb
maxmemory-policy allkeys-lru

# Persistence (lightweight for development)
save 900 1
save 300 10
save 60 10000

# Logging
loglevel notice
logfile /var/log/redis/redis-server.log

# Network
timeout 300
tcp-keepalive 60

# Development Settings
databases 16
# Enable keyspace notifications for debugging
notify-keyspace-events AKE

:bullseye: Database Automation Scripts

Automated Database Setup:

#!/bin/bash
# db/setup-databases.sh

set -e

echo "🗄️  Setting up development databases..."

# Create the backups directory
mkdir -p ./db/backups

# Create the .env file if it doesn't exist
if [ ! -f .env.db ]; then
    cat > .env.db << EOF
POSTGRES_DB=devdb
POSTGRES_USER=devuser
POSTGRES_PASSWORD=devpass123
MONGO_USER=mongouser
MONGO_PASSWORD=mongopass123
EOF
fi

# Start database services
echo "🚀 Starting database services..."
docker-compose -f docker-compose.db.yml --env-file .env.db up -d

# Wait for PostgreSQL to be ready (-T disables TTY allocation for scripted use)
echo "⏳ Waiting for PostgreSQL..."
until docker-compose -f docker-compose.db.yml exec -T postgres pg_isready -U devuser > /dev/null 2>&1; do
    sleep 1
done

# Run initialization scripts (also executed automatically on first container init)
echo "🔧 Running initialization scripts..."
docker-compose -f docker-compose.db.yml exec -T postgres psql -U devuser -d devdb -f /docker-entrypoint-initdb.d/001-extensions.sql
docker-compose -f docker-compose.db.yml exec -T postgres psql -U devuser -d devdb -f /docker-entrypoint-initdb.d/002-schemas.sql

# Seed with test data
echo "🌱 Seeding development data..."
./db/seed-data.sh

echo "✅ Database setup completed!"
echo "📊 Services available:"
echo "  - PostgreSQL: localhost:5432"
echo "  - Redis: localhost:6379"
echo "  - MongoDB: localhost:27017"
echo "  - Elasticsearch: localhost:9200"
echo "  - pgAdmin: http://localhost:5050"
echo "  - Redis Commander: http://localhost:8081"
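One caveat with the script above: the bare `until` loop blocks forever if PostgreSQL never becomes healthy. A small retry wrapper with a deadline avoids hanging CI jobs; a sketch (`wait_for` is an illustrative name, not part of the scripts above):

```shell
# wait_for: retry a command once per second until it succeeds or a deadline passes.
# Usage: wait_for <timeout_seconds> <command...>
wait_for() {
    local timeout=$1; shift
    local deadline=$(( $(date +%s) + timeout ))
    until "$@" > /dev/null 2>&1; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "Timed out after ${timeout}s waiting for: $*" >&2
            return 1
        fi
        sleep 1
    done
}

# In setup-databases.sh the bare loop could then become:
# wait_for 60 docker-compose -f docker-compose.db.yml exec -T postgres pg_isready -U devuser
```

A bounded wait fails fast with a clear message instead of leaving a pipeline stuck when a container never starts.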

Smart Data Seeding:

#!/bin/bash
# db/seed-data.sh

echo "🌱 Seeding development data..."

# PostgreSQL seeding (-T disables TTY allocation so the heredoc works)
docker-compose -f docker-compose.db.yml exec -T postgres psql -U devuser -d devdb << EOF
-- Create sample tables
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    name VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS posts (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    title VARCHAR(255) NOT NULL,
    content TEXT,
    published BOOLEAN DEFAULT false,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Insert test data
INSERT INTO users (email, name) VALUES
    ('john@example.com', 'John Doe'),
    ('jane@example.com', 'Jane Smith'),
    ('bob@example.com', 'Bob Johnson')
ON CONFLICT (email) DO NOTHING;

INSERT INTO posts (user_id, title, content, published) VALUES
    (1, 'First Post', 'This is the first post content', true),
    (1, 'Second Post', 'This is the second post content', false),
    (2, 'Jane''s Post', 'Content by Jane', true),
    (3, 'Bob''s Draft', 'Bob''s draft content', false)
ON CONFLICT DO NOTHING;

-- Refresh planner statistics
ANALYZE;
EOF

# Redis seeding
docker-compose -f docker-compose.db.yml exec -T redis redis-cli << EOF
SET user:1:session "active"
SET user:2:session "active"  
SET user:3:session "inactive"
HSET user:1:profile name "John Doe" email "john@example.com"
HSET user:2:profile name "Jane Smith" email "jane@example.com"
EXPIRE user:1:session 3600
EXPIRE user:2:session 3600
EOF

# MongoDB seeding
docker-compose -f docker-compose.db.yml exec -T mongodb mongosh admin -u mongouser -p mongopass123 << EOF
use devdb;

db.products.insertMany([
    {
        name: "Laptop Pro",
        price: 1299,
        category: "electronics",
        tags: ["computer", "laptop", "professional"],
        created_at: new Date()
    },
    {
        name: "Coffee Maker",
        price: 89,
        category: "appliances", 
        tags: ["coffee", "kitchen", "morning"],
        created_at: new Date()
    },
    {
        name: "Wireless Headphones",
        price: 199,
        category: "electronics",
        tags: ["audio", "wireless", "music"],
        created_at: new Date()
    }
]);

db.products.createIndex({ name: "text", tags: "text" });
db.products.createIndex({ category: 1, price: 1 });
EOF

echo "✅ Data seeding completed!"

:bar_chart: Database Performance Monitoring

Query Performance Analyzer:

-- db/init/001-extensions.sql
-- Install extensions useful for development

-- Query performance tracking
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Note: auto_explain is a loadable module, not an extension, so there is no
-- CREATE EXTENSION for it; it is loaded via shared_preload_libraries in
-- postgresql.conf and configured below

-- UUID generation
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Trigram matching (fuzzy search, LIKE/ILIKE acceleration)
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- B-tree operator support inside GIN indexes (useful for composite JSONB indexes)
CREATE EXTENSION IF NOT EXISTS btree_gin;

-- Configure auto_explain for slow queries
ALTER SYSTEM SET auto_explain.log_min_duration = 100;
ALTER SYSTEM SET auto_explain.log_analyze = true;
ALTER SYSTEM SET auto_explain.log_buffers = true;
SELECT pg_reload_conf();

Performance Monitoring Script:

#!/bin/bash
# db/monitor-performance.sh

echo "📊 Database Performance Report"
echo "================================"

# PostgreSQL performance stats
docker-compose -f docker-compose.db.yml exec -T postgres psql -U devuser -d devdb << EOF
-- Top 10 slowest queries (column names for PostgreSQL 13+)
SELECT 
    query,
    calls,
    total_exec_time,
    mean_exec_time,
    rows
FROM pg_stat_statements 
ORDER BY mean_exec_time DESC 
LIMIT 10;

-- Database size
SELECT 
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as size
FROM pg_tables 
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- Index usage
SELECT 
    indexrelname,
    idx_tup_read,
    idx_tup_fetch,
    idx_scan
FROM pg_stat_user_indexes 
ORDER BY idx_scan DESC;
EOF

# Redis memory usage
echo "📋 Redis Memory Usage:"
docker-compose -f docker-compose.db.yml exec -T redis redis-cli INFO memory | grep used_memory_human

# Elasticsearch cluster health
echo "🔍 Elasticsearch Health:"
curl -s localhost:9200/_cluster/health | jq '.'

echo "✅ Performance report completed!"

:counterclockwise_arrows_button: Backup and Restore Automation

Automated Backups:

#!/bin/bash
# db/backup-databases.sh

BACKUP_DIR="./db/backups"
DATE=$(date +%Y%m%d_%H%M%S)

echo "💾 Creating database backups..."

# PostgreSQL backup (-T prevents TTY control characters from corrupting the dump)
docker-compose -f docker-compose.db.yml exec -T postgres pg_dump -U devuser devdb > "$BACKUP_DIR/postgres_$DATE.sql"

# MongoDB backup
docker-compose -f docker-compose.db.yml exec -T mongodb mongodump --username mongouser --password mongopass123 --out /backups/mongo_$DATE
docker cp dev_mongo:/backups/mongo_$DATE "$BACKUP_DIR/"

# Redis backup
docker-compose -f docker-compose.db.yml exec -T redis redis-cli --rdb /data/redis_$DATE.rdb
docker cp dev_redis:/data/redis_$DATE.rdb "$BACKUP_DIR/"

# Cleanup old backups (keep last 7 days)
find "$BACKUP_DIR" -name "*.sql" -mtime +7 -delete
find "$BACKUP_DIR" -name "redis_*.rdb" -mtime +7 -delete
find "$BACKUP_DIR" -type d -name "mongo_*" -mtime +7 -exec rm -rf {} +

echo "✅ Backup completed: $BACKUP_DIR"
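The cleanup step deserves a note: `find -mtime +7` expires backups by file modification time, not by the timestamp embedded in the filename. A quick local dry run shows the behavior without touching any container (`touch -d` assumes GNU coreutils):

```shell
# Simulate the retention policy from backup-databases.sh in a throwaway directory.
BACKUP_DIR=$(mktemp -d)

touch -d "8 days ago" "$BACKUP_DIR/postgres_stale.sql"   # mtime 8 days in the past
touch "$BACKUP_DIR/postgres_fresh.sql"                   # mtime now

# Same expiry rule as the backup script: delete .sql files older than 7 days
find "$BACKUP_DIR" -name "*.sql" -mtime +7 -delete

remaining=$(ls "$BACKUP_DIR")
echo "kept: $remaining"
rm -rf "$BACKUP_DIR"
```

Because expiry is mtime-based, restoring or copying old backup files resets their clock; keep that in mind before relying on the names alone.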

Restore Script:

#!/bin/bash
# db/restore-databases.sh

BACKUP_DIR="./db/backups"
RESTORE_DATE=$1

if [ -z "$RESTORE_DATE" ]; then
    echo "Usage: ./restore-databases.sh YYYYMMDD_HHMMSS"
    echo "Available backups:"
    ls -la "$BACKUP_DIR" | grep -E "(postgres_|mongo_|redis_)"
    exit 1
fi

echo "🔄 Restoring databases from $RESTORE_DATE..."

# Stop applications that might be using the databases
echo "⏸️  Stopping application services..."
docker-compose down

# Restart database services
echo "🚀 Restarting database services..."
docker-compose -f docker-compose.db.yml up -d

# Wait for databases to be ready
sleep 10

# Restore PostgreSQL
if [ -f "$BACKUP_DIR/postgres_$RESTORE_DATE.sql" ]; then
    echo "📥 Restoring PostgreSQL..."
    docker-compose -f docker-compose.db.yml exec -T postgres dropdb --if-exists -U devuser devdb
    docker-compose -f docker-compose.db.yml exec -T postgres createdb -U devuser devdb
    docker-compose -f docker-compose.db.yml exec -T postgres psql -U devuser devdb < "$BACKUP_DIR/postgres_$RESTORE_DATE.sql"
fi

# Restore MongoDB
if [ -d "$BACKUP_DIR/mongo_$RESTORE_DATE" ]; then
    echo "📥 Restoring MongoDB..."
    docker-compose -f docker-compose.db.yml exec -T mongodb mkdir -p /restore
    docker cp "$BACKUP_DIR/mongo_$RESTORE_DATE" dev_mongo:/restore/
    docker-compose -f docker-compose.db.yml exec -T mongodb mongorestore --username mongouser --password mongopass123 --drop /restore/mongo_$RESTORE_DATE/
fi

# Restore Redis
if [ -f "$BACKUP_DIR/redis_$RESTORE_DATE.rdb" ]; then
    echo "📥 Restoring Redis..."
    docker-compose -f docker-compose.db.yml stop redis
    docker cp "$BACKUP_DIR/redis_$RESTORE_DATE.rdb" dev_redis:/data/dump.rdb
    docker-compose -f docker-compose.db.yml start redis
fi

echo "✅ Database restore completed!"
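Since the restore script drops databases, a confirmation prompt is cheap insurance against fat-fingered runs. A minimal sketch (the `confirm` helper is illustrative, not part of the script above):

```shell
# Ask before destructive operations; any answer other than y/Y means "no".
confirm() {
    local reply
    printf '%s [y/N] ' "$1"
    read -r reply
    case "$reply" in
        [yY]*) return 0 ;;
        *)     return 1 ;;
    esac
}

# Near the top of restore-databases.sh one could add:
# confirm "Overwrite local databases with backup $RESTORE_DATE?" || exit 1
```

Defaulting to "no" means an accidental Enter aborts instead of wiping local data.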

:artist_palette: Database GUI Tools Integration

pgAdmin Pre-configuration:

{
  "servers": {
    "1": {
      "Name": "Development Server",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "devdb",
      "Username": "devuser",
      "PassFile": "/tmp/pgpass",
      "SSLMode": "prefer"
    }
  }
}
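The `PassFile` entry points at `/tmp/pgpass`, which must exist inside the pgAdmin container in libpq's pgpass format with owner-only permissions. One way to generate it (the `./db/pgpass` path and the extra volume mount are assumptions, not shown in the compose file above):

```shell
# Create a pgpass file for pgAdmin: hostname:port:database:username:password
mkdir -p ./db
printf '%s\n' "postgres:5432:devdb:devuser:devpass123" > ./db/pgpass
chmod 600 ./db/pgpass   # the file is ignored if it is readable by others

# Then mount it into the pgadmin service, e.g.:
#   volumes:
#     - ./db/pgpass:/tmp/pgpass
```

With the pass file in place, pgAdmin opens the "Development Server" connection without prompting for a password.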

VS Code Database Extensions Setup:

// .vscode/settings.json - Database extensions
{
  "vscode-postgres.connections": [
    {
      "label": "Development DB",
      "host": "localhost",
      "user": "devuser",
      "password": "devpass123",
      "database": "devdb",
      "port": "5432"
    }
  ],
  "redis.connections": [
    {
      "label": "Dev Redis",
      "host": "localhost",
      "port": 6379,
      "auth": ""
    }
  ]
}

:test_tube: Testing Database Setup

Test Data Generator:

// db/generate-test-data.js
const { Pool } = require('pg');
const { faker } = require('@faker-js/faker');

const pool = new Pool({
  host: 'localhost',
  port: 5432,
  database: 'devdb',
  user: 'devuser',
  password: 'devpass123',
});

async function generateTestData() {
  console.log('🎭 Generating test data...');
  
  // Generate users
  for (let i = 0; i < 1000; i++) {
    await pool.query(`
      INSERT INTO users (email, name) 
      VALUES ($1, $2)
      ON CONFLICT (email) DO NOTHING
    `, [faker.internet.email(), faker.person.fullName()]);
  }
  
  // Generate posts
  const users = await pool.query('SELECT id FROM users LIMIT 100');
  
  for (let i = 0; i < 5000; i++) {
    const randomUser = users.rows[Math.floor(Math.random() * users.rows.length)];
    await pool.query(`
      INSERT INTO posts (user_id, title, content, published) 
      VALUES ($1, $2, $3, $4)
    `, [
      randomUser.id,
      faker.lorem.sentence(),
      faker.lorem.paragraphs(),
      faker.datatype.boolean()
    ]);
  }
  
  console.log('✅ Test data generation completed!');
  await pool.end();
}

generateTestData().catch(console.error);

:light_bulb: Pro Tips for Database Development

Environment Switching:

# Aliases for different environments
alias db-start="docker-compose -f docker-compose.db.yml up -d"
alias db-stop="docker-compose -f docker-compose.db.yml down"
alias db-logs="docker-compose -f docker-compose.db.yml logs -f"
alias db-backup="./db/backup-databases.sh"
alias db-restore="./db/restore-databases.sh"
alias db-monitor="./db/monitor-performance.sh"
alias db-seed="./db/seed-data.sh"

# Database connections
alias pgdev="psql -h localhost -U devuser -d devdb"
alias redisdev="docker-compose -f docker-compose.db.yml exec redis redis-cli"
alias mongodev="docker-compose -f docker-compose.db.yml exec mongodb mongosh admin -u mongouser -p mongopass123"
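The per-service aliases can also be collapsed into a single helper function (a sketch; `dbexec` is an illustrative name):

```shell
# dbexec <service> <command...> - run a command in any database container.
dbexec() {
    local service=$1; shift
    docker-compose -f docker-compose.db.yml exec -T "$service" "$@"
}

# Examples:
#   dbexec postgres psql -U devuser -d devdb -c 'SELECT count(*) FROM users;'
#   dbexec redis redis-cli DBSIZE
```

One function covers every service, and the `-T` flag keeps it safe to use in scripts and pipes.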

Query Optimization Helpers:

-- db/helpers/query-helpers.sql
-- Useful development functions

-- Query performance helper
CREATE OR REPLACE FUNCTION explain_query(query_text TEXT)
RETURNS TABLE(plan TEXT) AS $$
BEGIN
    RETURN QUERY EXECUTE 'EXPLAIN (ANALYZE, BUFFERS, VERBOSE) ' || query_text;
END;
$$ LANGUAGE plpgsql;

-- Table size helper
CREATE OR REPLACE FUNCTION table_sizes()
RETURNS TABLE(
    table_name TEXT,
    row_count BIGINT,
    total_size TEXT,
    index_size TEXT,
    table_size TEXT
) AS $$
BEGIN
    RETURN QUERY
    SELECT 
        schemaname||'.'||relname as table_name,
        n_live_tup as row_count,  -- planner estimate, not an exact count
        pg_size_pretty(pg_total_relation_size(schemaname||'.'||relname)) as total_size,
        pg_size_pretty(pg_indexes_size(schemaname||'.'||relname)) as index_size,
        pg_size_pretty(pg_relation_size(schemaname||'.'||relname)) as table_size
    FROM pg_stat_user_tables 
    ORDER BY pg_total_relation_size(schemaname||'.'||relname) DESC;
END;
$$ LANGUAGE plpgsql;

:chart_increasing: ROI of an Optimized Database Setup

Productivity Metrics:

Before the optimized setup:

  • Initial setup time: 2-4 hours
  • Onboarding new developers: 1-2 days
  • Query debugging: 30+ minutes per issue
  • Data inconsistency issues: 2-3 per week

After the optimized setup:

  • Initial setup time: 5 minutes (./db/setup-databases.sh)
  • Onboarding new developers: 10 minutes
  • Query debugging: 5-10 minutes (with monitoring)
  • Data inconsistency issues: 0-1 per month

Quantifiable Benefits:

  • 85% reduction in database setup time
  • 90% fewer "works on my machine" problems
  • 70% faster debugging of performance issues
  • 100% consistency between development and testing

:bullseye: Implementation Roadmap

Phase 1: Base Setup (1 day)

  • Configure Docker Compose with the main databases
  • Initialization and seeding scripts
  • Development-optimized configuration

Phase 2: Tooling (1 day)

  • GUI tools integration (pgAdmin, Redis Commander)
  • VS Code extensions configuration
  • Performance monitoring scripts

Phase 3: Automation (1 day)

  • Backup/restore automation
  • Test data generation
  • Aliases and helpers for the daily workflow

Phase 4: Optimization (ongoing)

  • Query performance tuning
  • Resource usage optimization
  • Team-specific customizations

:speech_balloon: Database Development Evolution

A well-configured database development environment is invisible but transformative. It removes daily friction, speeds up debugging, and lets you focus on business logic instead of infrastructure.

The initial configuration investment pays for itself within the first week, and the compounding benefits last for the life of the project.

What is your biggest pain point with databases in development? Which tools have transformed your database workflow? How do you keep data consistent across team members?

Let's share our database development setups and build a collective arsenal of productivity tools.

#SaturdaySetup #DatabaseDevelopment #PostgreSQL #MongoDB #Redis #Docker #DevOps #DataEngineering #Productivity