
LinkDing

LinkDing is a minimal bookmarking application: paste a link and it is added to your list with a title, description, and image. After a link is pasted, the page is scraped for metadata, including a main image, which is then shown on the link card.

Features

  • Paste links to build a list with title, description, and image
  • Automatic metadata extraction
  • Search functionality by title, description, and URL
  • Modern, responsive web interface
  • Support for JavaScript-heavy sites using Puppeteer
  • Automatic fallback from HTTP scraping to browser rendering

Tech Stack

  • Backend: Express.js (Node.js)
  • Frontend: Vanilla JavaScript, HTML5, CSS3
  • Web Scraping: Cheerio + Puppeteer (for JavaScript-heavy sites)
  • Data Storage: JSON file

Installation

Prerequisites

  • Node.js 18+ (or Docker)
  • Chromium/Chrome (for Puppeteer support, optional)

Local Installation

  1. Clone the repository or navigate to the project directory:

    cd linkding
    
  2. Install dependencies:

    npm install
    
  3. Start the server:

    npm start
    
  4. Open your browser to http://localhost:3000

Docker Installation

  1. Build the Docker image:

    docker build -t linkding .
    
  2. Run the container:

    docker run -d \
      --name linkding \
      -p 3000:3000 \
      -v $(pwd)/data:/app/data \
      linkding
    

    Or use Docker Compose (a sample compose file is sketched after this list):

    docker-compose up -d
    
  3. Access the application at http://localhost:3000
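
A compose file equivalent to the docker run command above might look roughly like this. This is a sketch; the service layout in the project's actual docker-compose.yml may differ.

    # Sketch of a compose file matching the docker run command above;
    # the project's shipped docker-compose.yml may differ.
    services:
      linkding:
        build: .
        container_name: linkding
        ports:
          - "3000:3000"
        volumes:
          - ./data:/app/data
        restart: unless-stopped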

Usage

  1. Add a Link: Paste a URL into the input field and click "Add Link"
  2. Search: Use the search bar to filter links by title, description, or URL
  3. View Links: Browse your saved links with images, titles, and descriptions
  4. Delete Links: Click the "Delete" button on any link card to remove it

API Endpoints

  • GET /api/links - Get all saved links
  • GET /api/links/search?q=query - Search links
  • POST /api/links - Add a new link (body: { "url": "https://example.com" })
  • DELETE /api/links/:id - Delete a link by ID
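
A minimal client-side sketch of calling these endpoints with fetch (works in the browser or Node.js 18+; response field names are not documented here, so inspect the actual responses):

    // Sketch of using the API above; response shapes are assumptions.
    const BASE = 'http://localhost:3000';

    async function addLink(url) {
      const res = await fetch(`${BASE}/api/links`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ url }),
      });
      if (!res.ok) throw new Error(`Failed to add link: ${res.status}`);
      return res.json();
    }

    async function searchLinks(query) {
      const res = await fetch(`${BASE}/api/links/search?q=${encodeURIComponent(query)}`);
      return res.json();
    }

    // Example usage:
    // addLink('https://example.com').then(console.log);
    // searchLinks('example').then(console.log);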

Metadata Extraction

The application automatically extracts:

  • Title: From Open Graph tags, JSON-LD structured data, or HTML <h1>/<title> tags
  • Description: From meta tags, structured data, or page content
  • Images: Prioritizes product container images, then meta tags, with smart fallbacks
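
A rough sketch of how title and description extraction with Cheerio can work. This illustrates the approach, not the project's actual scraper code; selectors and fallback order are assumptions.

    // Illustrative Open Graph / JSON-LD / HTML fallbacks with Cheerio.
    const cheerio = require('cheerio');

    function extractBasicMetadata(html) {
      const $ = cheerio.load(html);

      // JSON-LD structured data is another source the scraper can consult.
      let ld = {};
      try {
        ld = JSON.parse($('script[type="application/ld+json"]').first().text() || '{}');
      } catch { /* ignore malformed JSON-LD */ }

      // Title: Open Graph, then JSON-LD, then <h1>, then <title>
      const title =
        $('meta[property="og:title"]').attr('content') ||
        ld.name ||
        $('h1').first().text().trim() ||
        $('title').text().trim();

      // Description: Open Graph, then meta description, then JSON-LD
      const description =
        $('meta[property="og:description"]').attr('content') ||
        $('meta[name="description"]').attr('content') ||
        ld.description ||
        '';

      return { title, description };
    }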

Image Extraction Priority

  1. Product container images (.product-container img, etc.)
  2. Product-specific image containers
  3. Open Graph / Twitter Card meta tags
  4. JSON-LD structured data
  5. Generic product selectors
  6. Fallback to meaningful images
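
A sketch of how such a priority chain can be implemented. The selectors and their order below only illustrate the idea; the real list lives in the scraper and is longer.

    // Illustrative priority chain for picking an image from a loaded Cheerio document.
    function extractImage($) {
      const candidates = [
        () => $('.product-container img').first().attr('src'),   // product container images
        () => $('meta[property="og:image"]').attr('content'),    // Open Graph
        () => $('meta[name="twitter:image"]').attr('content'),   // Twitter Card
        () => $('img[src]').first().attr('src'),                 // last-resort fallback
      ];

      for (const pick of candidates) {
        const src = pick();
        if (src && src.trim()) return src.trim();
      }
      return null;
    }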

Environment Variables

  • PORT - Server port (default: 3000)
  • CHROME_EXECUTABLE_PATH - Path to Chrome/Chromium executable (for Puppeteer)
  • NODE_ENV - Environment mode (production/development)
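
As an illustration, server code typically picks these up via process.env; the defaults below mirror the table above, and the app's exact handling may differ.

    // Illustration of reading the variables above.
    const PORT = process.env.PORT || 3000;
    const CHROME_PATH = process.env.CHROME_EXECUTABLE_PATH; // optional, passed to puppeteer-core
    const IS_PROD = process.env.NODE_ENV === 'production';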

Data Storage

Links are stored in data/links.json. Make sure the data/ directory exists and is writable. When using Docker, mount the data directory as a volume for persistence.
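
A sketch of the JSON-file storage approach; the field names in the example record are assumptions based on the features above, not the app's exact schema.

    // Sketch of reading and writing the JSON store.
    const fs = require('fs');
    const path = require('path');

    const DATA_FILE = path.join(__dirname, 'data', 'links.json');

    function loadLinks() {
      if (!fs.existsSync(DATA_FILE)) return [];
      return JSON.parse(fs.readFileSync(DATA_FILE, 'utf8'));
    }

    function saveLinks(links) {
      fs.mkdirSync(path.dirname(DATA_FILE), { recursive: true });
      fs.writeFileSync(DATA_FILE, JSON.stringify(links, null, 2));
    }

    // A stored entry could look roughly like:
    // { "id": "...", "url": "https://example.com", "title": "...",
    //   "description": "...", "image": "...", "createdAt": "..." }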

Troubleshooting

Puppeteer Issues

If you encounter issues with Puppeteer:

  1. NixOS: The app uses puppeteer-core and automatically detects system Chromium
  2. Docker: Chromium is included in the Docker image
  3. Manual Setup: Set CHROME_EXECUTABLE_PATH environment variable to your Chromium path

403 Errors

Some sites block automated requests. The app automatically:

  • First tries HTTP requests with realistic headers
  • Falls back to Puppeteer for JavaScript rendering if blocked
  • Uses system Chromium for browser automation
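
A simplified sketch of this fallback flow; header values, the Chromium path default, and error handling are illustrative, not the app's exact logic.

    // HTTP-first, Puppeteer-fallback flow described above (requires Node.js 18+ for global fetch).
    const puppeteer = require('puppeteer-core');

    async function fetchPageHtml(url) {
      // 1. Try a plain HTTP request with realistic browser-like headers.
      const res = await fetch(url, {
        headers: { 'User-Agent': 'Mozilla/5.0', 'Accept': 'text/html' },
      });
      if (res.ok) return res.text();

      // 2. Blocked (e.g. 403) or failed: fall back to a real browser render.
      const browser = await puppeteer.launch({
        executablePath: process.env.CHROME_EXECUTABLE_PATH || '/usr/bin/chromium',
        args: ['--no-sandbox'],
      });
      try {
        const page = await browser.newPage();
        await page.goto(url, { waitUntil: 'networkidle2' });
        return await page.content();
      } finally {
        await browser.close();
      }
    }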

Development

# Install dependencies
npm install

# Run in development mode with auto-reload
npm run dev

# Start production server
npm start

License

ISC
