Thursday, April 9, 2026

Order Birth Certificates Online | VitalChek


VitalChek – Birth Certificates, Death Certificates, Marriage Records, Divorce Records and Vital Records

How VitalChek Works

What Do You Need?

To complete your order smoothly, please have the following information ready:

Personal Details

Full name, date of birth and place of birth.

  • Identification:
    Not all orders require this, but you may need a valid government-issued ID (e.g., driver's license, passport).
  • Proof of Entitlement:
    Occasionally, we'll need documentation proving your right to access the certificate.
  • Payment Method:
    Credit or debit card.

Birth Certificates

Our vital records are certified official documents provided directly by government agencies.

Here's what you can expect on your certificate:

  • Accurate and Up-to-date Information:
    All data is verified to ensure accuracy.
  • Official Seal and Signatures:
    Each certificate is legally recognized and comes with official seals and signatures directly from the government agency. These can be used for driver's licenses, insurance, travel benefits and more.
  • Secure and Confidential:
    Your personal information is handled with the highest security and confidentiality.


VitalChek has partnered with vital record agencies within the United States to bring you the ability to order the vital records you need.

Need a different certificate?

We can also help you obtain Death, Marriage, and Divorce certificates, as well as certificates from other states. Order your certificate now.

We have direct relationships with state and local government agencies in the United States.

That means we can process your request quickly, affordably, and securely. The only thing faster is doing it in person, but that's not always possible or convenient, and that's why we're here.

Experienced

Authorized

Transparent

Secure

Fast

Hundreds of government agencies nationwide exclusively trust VitalChek to accept their birth certificate and other vital record orders.

Our easy application process will help you find the right agency and certificate for your specific needs.

Why should you trust VitalChek?

4 million
vital documents processed per year

More than 35 years
of experience

100%
compliance with the highest level PCI standards to protect your personal information

ORDER NOW

 



10 Items Quietly Disappearing From Store Shelves — Most Won't Exist Next Year - YouTube

Disappearing From Store Shelves: An Investigation Into Household Essentials and Supply Chain Realities

BLUF (Bottom Line Up Front):
A recent viral video claims ten household items are "disappearing" and urges consumers to stock up. Our investigation confirms that some of these items do face genuine supply challenges—but the story is more complex than a simple vanishing act. Hazmat shipping regulations and reduced demand have genuinely constrained availability of strike-anywhere matches, and borax faces potential regulatory restrictions in the United States modeled on the European Union's 2010 classification as a reproductive toxicant. However, most of the "critical" items investigated remain readily available through conventional channels, often at lower costs than sensationalized claims suggest.


Executive Summary

Over the past five years, we have witnessed documented shifts in retail stock of specific commodity items—notably strike-anywhere matches, which have seen multiple manufacturers including Ohio Blue Tips, Penley, and UCO discontinue production due to hazmat shipping restrictions imposed by FedEx, UPS, and USPS. We also document regulatory pressure on borax and the legitimate multifunctional uses of potassium permanganate. Yet our research reveals that many of the most touted "disappearing" items are either still widely available, available at modest cost, or have been mischaracterized in terms of their emergency utility.

This article presents what we found—grounded in scientific evidence, regulatory filings, and market data—and what consumers actually need to know about supply, efficacy, and regulatory status.


Item 1: Activated Charcoal for Poison Treatment

Claim: Activated charcoal powder is a life-saving poison treatment that "the WHO recommends" and is being quietly replaced by capsules and removed from retail shelves.

What Research Shows:

Activated charcoal is a form of carbon that can bind other substances onto its surface in a process known as adsorption, and it is effective in adsorbing many types of poison in the gut, so the poison does not enter the body. The World Health Organization includes activated charcoal in its Model List of Essential Medicines, and it is most efficacious when given within one hour of ingestion of the toxin.

However, critical caveats apply. Over-the-counter products might not be as "activated" as the activated charcoal used in the ER, so they would be less effective. Over-the-counter activated charcoal typically comes in 250 mg tablets. Providing the same dose given in an emergency room (50–100 g) would require hundreds of tablets. More importantly, while activated charcoal is a proven treatment for certain severe ingestions in a clinical setting, experts overwhelmingly agree that it is not an effective or recommended home treatment for food poisoning. Additionally, activated charcoal can also bind to foods you have eaten, blocking the absorption of nutrients and medications you may have taken, reducing their effectiveness.
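The dose gap described above works out as simple arithmetic (figures taken from the text; illustrative only):

```python
# Tablets needed to match an emergency-room activated-charcoal dose
tablet_mg = 250          # typical over-the-counter tablet, per the text
er_dose_g = (50, 100)    # ER dose range in grams, per the text

low, high = (dose_g * 1000 // tablet_mg for dose_g in er_dose_g)
print(f"{low}-{high} tablets")  # 200-400 tablets
```

That is, matching even the low end of a clinical dose would take two hundred tablets, which is why the OTC format is not a practical substitute.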

Bottom Line:
Activated charcoal powder does have documented emergency medical use for specific poisonings when administered in a hospital setting under professional guidance. Home use is not recommended by poison control experts. Availability: Still widely available online and through bulk suppliers at modest cost ($10–18 per pound). The claim that it is "disappearing" is not supported by current market data.


Item 2: Potassium Permanganate and the DEA Regulation Question

Claim: Potassium permanganate is regulated by the DEA, tracked when purchased in bulk, and "quietly" being removed from retail shelves because of government restrictions.

What Research Shows:

Potassium permanganate is listed as a List II chemical under DEA control, designated as one used in the manufacture of controlled substances. The domestic threshold for potassium permanganate is 55 kilograms, and the import/export threshold is 500 kilograms. The DEA learned that U.S. firms were exporting large quantities of potassium permanganate to cocaine-producing countries, and significant amounts of these chemicals ultimately were diverted to clandestine cocaine laboratories.

However, this chemical control program has been effective in reducing the supply of illicit methamphetamine and cocaine: when the Chemical Diversion and Trafficking Act went into effect in 1989, the quantity of these chemicals shipped to South America from the United States declined greatly.

Regarding the Glycerin-Permanganate Reaction:
The reaction between potassium permanganate and glycerol is highly exothermic, resulting rapidly in a flame, along with the formation of carbon dioxide and water vapor. The reaction can produce a bright, pinkish (lilac) flame for a few seconds, leaving a dark brown or black residue. However, while the potassium permanganate and glycerin reaction is popular in chemical demonstration videos and some survival manuals, it is generally not part of a standard fire-starting kit for most outdoor enthusiasts, and other methods such as friction-based fire making or using commercial fire starters are typically more reliable and less hazardous for everyday use.

Bottom Line:
Potassium permanganate is legal to own and remains available. DEA tracking does occur at the bulk level (over 55 kg domestically), but this does not affect home purchases of small quantities. The chemical does have documented fire-starting properties via glycerin contact, but is rarely recommended as a primary survival fire method. Availability: Online through chemical suppliers, aquarium stores, and veterinary suppliers. Cost: $8–15 per 100–500 gram bottle.


Item 3: Strike-Anywhere Matches—A Genuine Market Shift

Claim: Strike-anywhere matches are disappearing; only the Diamond brand remains, and Ohio Blue Tips have been permanently discontinued.

What Research Shows:

This claim is substantially accurate regarding market trends. Ohio Blue Tips, Penley Strike Anywhere Matches, and UCO Strike Anywhere Matches have all been discontinued. The only brand widely available in the US at this time is Diamond Greenlight Strike Anywhere Matches. The primary driver is shipping logistics: Strike anywhere matches are difficult to find and expensive to ship since they could accidentally ignite during shipping. The USPS prohibits shipping strike anywhere matches and both UPS and FedEx require that strike anywhere matches be shipped as hazardous materials and only via ground shipping.

However, Diamond Greenlight Strike Anywhere Kitchen Matches are available through multiple retail channels, including hardware stores, with boxes containing 300 matches and made from responsibly managed forests. Diamond Greenlight Strike Anywhere matches are available for purchase online through Walmart and Amazon.

Quality concerns: Some users report that the newer Diamond Greenlight formulation performs less reliably than older versions. The matches have been reformulated, possibly to circumvent shipping restrictions, which may account for the difficulty in lighting.

Bottom Line:
Strike-anywhere matches have genuinely consolidated to a single major brand due to hazmat regulations, and some older brands have been discontinued. However, Diamond Greenlight matches remain available through major retailers. Availability: Walmart, Amazon, Ace Hardware. Cost: $30–50 for 10 boxes (2,500 matches).


Item 4: Kerosene Heating Fuel—A Quiet Decline in Infrastructure

Claim: Gas stations are "ripping out" kerosene pumps due to low demand; the infrastructure is in "death spiral."

What Research Shows:

Kerosene remains a practical fuel for portable heaters, lamps, and cleaning applications, especially during power outages or in off-grid scenarios. While many consumers initially check hardware stores, these retailers often price kerosene at $10 to $15 per gallon. In contrast, numerous gas stations across the United States sell K-1 grade kerosene for just $4 to $5 per gallon.

Market data on station closures is less dramatic than the video claims. While some regional shortages have occurred (particularly during peak heating season or in areas with limited demand), no national shortage has been documented since 2021. In 2022, Maine experienced kerosene supply constraints driven by the war in Ukraine and increased jet fuel demand. Despite fears that the Israel-Hamas War and the EU boycott on Russian fuel would lead to shortages, no US state experienced one between 2023 and 2025.

Bottom Line:
Kerosene availability is declining in some regions due to low consumer demand and the shift to electric heating, but infrastructure has not collapsed. Seasonal availability and regional variation are real. Availability: Gas stations (check GasBuddy's kerosene locator), Tractor Supply, online suppliers. Cost: $4–5 per gallon at gas stations; $10–15 at hardware stores.


Item 5: Borax and Regulatory Headwinds

Claim: Borax is disappearing from shelves because the U.S. is "quietly" following Europe's reproductive toxicity ban.

What Research Shows:

Borax is currently legal in the United States and available in laundry sections nationwide. However, there is genuine regulatory pressure. Borax is a naturally occurring mineral composed of sodium, boron, oxygen, and water. The European Chemicals Agency (ECHA) has classified borax as a Substance of Very High Concern (SVHC) because studies, primarily conducted on rodents exposed to high doses, suggested adverse effects on fertility and developmental processes. In 2015, this led the ECHA to classify boron compounds as reproductive toxicants under Category 1B.

In contrast, in the United States, the Environmental Protection Agency acknowledges potential reproductive risks but does not classify borax as a known human toxin at typical exposure levels. As a result, it remains legal for over-the-counter sale.

Bottom Line:
Borax remains legal and available in the U.S., though it is restricted or banned in the EU and UK. U.S. consumer groups are pushing for restrictions, which could eventually result in tighter regulation, but no ban is imminent. Availability: Grocery stores (laundry aisle), online. Cost: $5–21 for a multi-year supply.


Item 6: Other Items Investigated

We also examined diatomaceous earth, wool military blankets, canning lids, paraffin lamp oil, and alum. All of these items remain available, though availability varies by region and some face minor market consolidation. Detailed findings:

  • Diatomaceous Earth (food grade): Available through Tractor Supply, livestock suppliers, and online. Cost: $15–25 per 15 lbs. Caution: Pool-grade DE poses silicosis risk; verify "food grade" labeling.
  • Wool Military Surplus Blankets: Available through surplus dealers, eBay, Coleman's military surplus. Cost: $25–50. Supply is finite due to discontinued military procurement, making this a legitimate collectible.
  • Canning Lids: Genuine Ball and Kerr lids available through big-box retailers and direct from manufacturers. Counterfeits exist, particularly on Amazon; the sealing compound on counterfeits may be too thin, and rim dimensions inconsistent. Cost: $0.08–0.15 per lid.
  • Paraffin Lamp Oil: Available through Lowe's, Walmart, and online. Cost: $32–48 per gallon. Genuine paraffin (not citronella) produces minimal soot and is safe for indoor use.
  • Alum: Used historically for water clarification and pickle-curing. Available in spice aisles and Indian grocery stores. Cost: $5–8 per pound.

What the Research Reveals About Supply Chains

Legitimate Trends:

  1. Consolidation: Multiple suppliers have exited commodity markets (matches, wool blankets, some brands of kerosene heaters) due to low profit margins, regulatory burden, or shifting consumer preferences.
  2. Hazmat & Shipping: Items classified as hazardous (strike-anywhere matches, potassium permanganate in quantity) face shipping restrictions that reduce retail availability and increase retail prices.
  3. Regulatory Divergence: The U.S. and EU have begun to regulate chemicals differently (borax, boric acid), and U.S. consumer groups are calling for harmonization. This creates genuine uncertainty for future availability.
  4. Market Efficiencies: Retailers have optimized inventory for high-turnover items. Low-demand specialty goods (kerosene, potassium permanganate, strike-anywhere matches) are stocked selectively or on order.

Overstated Claims:

The video presents these trends as a coordinated, imminent collapse. Our research shows that:

  • No government agency is "quietly" removing items from shelves.
  • Most items remain available through conventional online retailers, specialty suppliers, or regional retailers.
  • Prices have risen modestly in some categories (notably kerosene during peak season), but not by the multiples the video implies.
  • Supply chains remain functional for nearly all items listed, with regional variations.

Medical and Safety Considerations

Activated Charcoal:
The Utah Poison Control Center states that activated charcoal should only be given in health care facilities. They do not recommend at-home use of activated charcoal for poisonings. If a poisoning occurs, call Poison Control (1-800-222-1222 in the U.S.) before attempting home treatment.

Potassium Permanganate:
While the glycerin-permanganate reaction is documented, this activity requires the use of hazardous components and has the potential for hazardous reactions. Potassium permanganate is a powerful oxidizing agent that can explode on sudden heating. Small particles of potassium permanganate may be expelled from the reaction vessel. This is unsuitable for casual survival use and is primarily taught in military and advanced wilderness training.

Borax:
Borax is a skin and eye irritant, and inhaling the powder can lead to respiratory irritation. Most importantly, ingestion can cause severe poisoning, especially in children and pets, with potential for long-term health issues and even death.


Bottom Line for Consumers

If you are genuinely concerned about supply disruption:

  1. Focus on items with documented supply challenges: Strike-anywhere matches (buy through retail now), kerosene (seasonal purchasing is advisable in cold climates), wool blankets (genuine military surplus is finite).
  2. Ignore hype on items that are readily available: Activated charcoal powder, alum, borax, paraffin oil, diatomaceous earth, and canning lids remain available through normal retail and online channels.
  3. Understand regulatory risks: Borax may face future restriction in the U.S., but this is not imminent. Stay informed via EPA updates.
  4. Prioritize reliability: For emergency preparedness, conventional methods (commercial fire starters, standard lighter fluid, modern canning equipment) remain more reliable than chemical combinations or rare commodity items.
  5. Avoid false urgency: Viral content that frames supply chains as "collapsing" or "quietly" being dismantled is misleading. Real supply issues are typically driven by market economics, not conspiracy, and occur gradually over years.



Methodology Note:
This report draws on peer-reviewed toxicology literature, government regulatory filings, current retail inventory data, and market analysis. Claims from the source video are fact-checked against primary sources, including academic literature, official agency guidance, and real-time market data. Where uncertainty exists, we note it. Recommendations prioritize evidence-based safety over sensationalism.


Last Updated: April 2026

 

Wednesday, April 8, 2026

How to Install Gemma 4 Locally with Ollama (2026 Guide)

 

By Vishnu

Gemma 4 is Google’s latest open-weight language model — a significant leap from Gemma 3 with better reasoning, longer context, and improved coding performance. Unlike cloud APIs, running it locally means zero data leaves your machine. Perfect for proprietary code, air-gapped environments, or just avoiding subscription fees.

Gemma 4 comes in four sizes: E2B and E4B for edge devices (phones, Raspberry Pi, IoT), and 26B MoE plus 31B Dense for workstations. All models are multimodal (vision + audio on edge models), support 140+ languages, and now use the permissive Apache 2.0 license.

:::note[TL;DR]

  • Gemma 4 comes in four sizes: E2B, E4B (edge/mobile), 26B MoE, and 31B Dense (workstation/server)
  • E2B/E4B run on phones, Raspberry Pi, Jetson Nano with 128K context
  • 26B MoE activates only 3.8B params for fast inference; 31B Dense for maximum quality with 256K context
  • All models are multimodal (vision + audio on edge) and support 140+ languages
  • Install Ollama, then ollama pull gemma4:27b — models download automatically on first use
  • Apple Silicon gets GPU acceleration; NVIDIA needs ~24GB+ VRAM for the 31B model
  • Now under Apache 2.0 license (not Google’s custom license) — truly open for commercial use
:::

Prerequisites

Before installing Gemma 4, check your hardware:

Minimum (CPU only):

  • 4 GB RAM for E2B models (edge/IoT)
  • 8 GB RAM for E4B models
  • 16 GB RAM for 26B MoE models
  • 32 GB RAM for 31B Dense models

Edge/Mobile (E2B/E4B):

  • Runs on Raspberry Pi 4/5, NVIDIA Jetson Orin Nano
  • Android phones with 6GB+ RAM
  • iOS devices (via Core ML)
  • 128K context window

Better performance (GPU):

  • Apple Silicon Mac (M1/M2/M3/M4) — Metal acceleration works out of the box
  • NVIDIA GPU with 8+ GB VRAM for E4B models
  • NVIDIA GPU with 16+ GB VRAM for 26B MoE
  • NVIDIA GPU with 24+ GB VRAM for 31B Dense
  • 256K context window for 26B/31B models

Key Features:

  • Multimodal: Vision + audio understanding on all models
  • Multilingual: Native support for 140+ languages
  • Agentic: Native function calling and structured JSON output
  • License: Apache 2.0 (fully permissive for commercial use)
  • Context: 128K (E2B/E4B) or 256K (26B/31B) tokens

Install Ollama

If you don’t have Ollama yet, install it first:

macOS:

brew install ollama

Linux:

curl -fsSL https://ollama.com/install.sh | sh

Windows: Download from ollama.com. Runs as a background service.

Verify installation:

ollama --version

Download and Run Gemma 4

Ollama makes this trivial. Models download on first use and cache for future runs.

# Run the E2B model (edge/IoT, ~2GB, fastest on limited hardware)
ollama run gemma4:2b

# Run the E4B model (edge/IoT, ~3GB, better quality than E2B)
ollama run gemma4:4b

# Run the 26B MoE model (desktop, activates 3.8B params, fast inference)
ollama run gemma4:27b

# Run the 31B Dense model (workstation, maximum quality, 256K context)
ollama run gemma4:31b

The Scenario: You’re deploying an AI assistant on a Raspberry Pi 5 at a remote factory. You pull gemma4:2b, get local vision + audio processing with 128K context, and it all runs offline without internet. The E2B model handles OCR from camera feeds and voice commands natively.

First launch downloads the model weights:

  • E2B: ~2GB
  • E4B: ~3GB
  • 26B MoE: ~16GB (fits on 80GB H100 unquantized, ~7GB quantized)
  • 31B Dense: ~19GB (fits on 80GB H100 unquantized, ~8GB quantized)

Subsequent starts are instant.

Available Model Variants

Gemma 4 offers quantized variants for different VRAM constraints:

| Variant | Effective Size | VRAM Needed | Best For | Context |
|---|---|---|---|---|
| gemma4:2b (E2B) | ~2 GB | 3-4 GB | Raspberry Pi, IoT, phones | 128K |
| gemma4:4b (E4B) | ~3 GB | 4-6 GB | Edge devices, Jetson Nano | 128K |
| gemma4:27b (26B MoE) | ~16 GB (activates 3.8B) | 12-16 GB | Fast desktop inference | 256K |
| gemma4:31b (31B Dense) | ~19 GB | 24+ GB | Maximum quality, fine-tuning | 256K |
| gemma4:27b-q4_K_M | ~7 GB | 8-10 GB | Mid-range GPUs (26B MoE) | 256K |
| gemma4:31b-q4_K_M | ~8 GB | 10-12 GB | High-end consumer GPUs | 256K |

Key difference: The 26B MoE activates only 3.8 billion parameters during inference — delivering exceptional tokens/second while still having 26B total capacity. The 31B Dense uses all parameters for maximum quality.

Pull a quantized variant:

ollama pull gemma4:31b-q4_K_M

:::tip
The q4_K_M quantization uses 4-bit precision with intelligent mixing. You lose ~2-3% quality but save 30-40% VRAM. Most users won’t notice the difference for everyday coding tasks.
:::

Hardware-Specific Setup

Apple Silicon (M1/M2/M3/M4)

No configuration needed. GPU acceleration works automatically via Metal:

ollama run gemma4:27b

On an M2 Pro with 16GB unified memory, the 26B MoE model runs at roughly 30 tokens/second. The 31B Dense model also runs on M-series chips with 24GB+ RAM, though you may need to close other apps.

NVIDIA GPUs

Install the NVIDIA Container Toolkit for maximum throughput. Verify CUDA is available:

ollama ps  # Shows if GPU is being used

:::warning
If you see “CUDA out of memory” errors, your model is too large for your VRAM. Kill the process with ollama stop gemma4:27b and switch to a smaller variant or quantized version.
:::

CPU-Only Systems

Gemma 4 runs on CPU if you lack a compatible GPU. It’s slower but functional:

# Force CPU mode if needed
export OLLAMA_NO_GPU=1
ollama run gemma4:2b

Expect 2-5 tokens/second on a modern CPU for the E2B model. Usable for simple queries on edge devices.

Edge Devices (Raspberry Pi, Jetson Nano)

The E2B and E4B models are engineered specifically for edge:

# On Raspberry Pi 5 with 8GB RAM
ollama run gemma4:2b

# On NVIDIA Jetson Orin Nano
ollama run gemma4:4b

Features on edge:

  • Vision: Process camera frames locally for OCR, object detection
  • Audio: Native speech recognition and understanding
  • Offline: Works without internet after initial download
  • Low latency: Near-zero response time for real-time applications

Using the REST API

Ollama exposes an OpenAI-compatible API at localhost:11434:

Basic chat completion

curl http://localhost:11434/api/chat -d '{
  "model": "gemma4:31b",
  "messages": [
    { "role": "user", "content": "Explain recursion in Python" }
  ],
  "stream": false
}'

Generate (single prompt)

curl http://localhost:11434/api/generate -d '{
  "model": "gemma4:31b",
  "prompt": "Write a Python function to reverse a linked list",
  "stream": false
}'

OpenAI-compatible endpoint

Any library that works with OpenAI can point to Ollama:

from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama'  # required but ignored
)

response = client.chat.completions.create(
    model='gemma4:27b',
    messages=[{'role': 'user', 'content': 'Refactor this function'}]
)
print(response.choices[0].message.content)

Python SDK Usage

Install the official Ollama Python library:

pip install ollama

Basic usage:

import ollama

response = ollama.chat(
    model='gemma4:31b',
    messages=[
        {'role': 'user', 'content': 'Write a bash script to find large files'}
    ]
)
print(response['message']['content'])

Streaming for real-time output:

stream = ollama.chat(
    model='gemma4:31b',
    messages=[{'role': 'user', 'content': 'Tell me a joke'}],
    stream=True,
)

for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)

IDE Integration

Continue.dev (VS Code / JetBrains)

Add to your Continue config:

{
  "models": [
    {
      "title": "Gemma 4 31B (Local)",
      "provider": "ollama",
      "model": "gemma4:31b",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Gemma 4 26B MoE Autocomplete",
    "provider": "ollama",
    "model": "gemma4:27b"
  }
}

The Scenario: You’re on a plane with no Wi-Fi. Open VS Code, hit Tab for autocomplete, and Gemma 4 suggests the next line. Local AI doesn’t need the internet.

Cursor

In Cursor settings, add a custom OpenAI-compatible model:

  • Base URL: http://localhost:11434/v1
  • Model: gemma4:31b

Claude Code

Pipe files to your local Gemma 4 instance:

claude -p "Review this code for bugs" < src/utils/parser.ts

Useful Commands

ollama list                  # show downloaded models
ollama pull gemma4:31b       # download a specific variant
ollama rm gemma4:27b         # remove a model to free space
ollama show gemma4:31b       # model info and parameters
ollama ps                    # show running models
ollama stop gemma4:31b       # stop a running model
ollama run gemma4:4b "prompt" # one-shot, non-interactive
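The same information is available programmatically: Ollama's local REST API exposes downloaded models at the /api/tags endpoint. A minimal sketch, assuming the server is running on the default port:

```python
import json
import urllib.request

def parse_model_names(payload):
    """Extract model names from the /api/tags JSON payload."""
    return [m["name"] for m in payload.get("models", [])]

def list_local_models(host="http://localhost:11434"):
    """Return names of downloaded models (equivalent to `ollama list`)."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        return parse_model_names(json.load(resp))

if __name__ == "__main__":
    print(list_local_models())
```

Useful when scripting deployments that should pull a model only if it is not already cached.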

Performance Comparison

Approximate tokens/second on different hardware:

| Hardware | E2B | E4B | 26B MoE | 31B Dense |
|---|---|---|---|---|
| Raspberry Pi 5 (8GB) | 8 t/s | 4 t/s | N/A | N/A |
| M2 Pro (16GB) | 45 t/s | 35 t/s | 30 t/s | 15 t/s |
| RTX 4090 (24GB) | 90 t/s | 75 t/s | 65 t/s | 35 t/s |
| RTX 3060 (12GB) | 30 t/s | 25 t/s | 20 t/s | N/A |
| CPU (i7-12700K) | 5 t/s | 3 t/s | <1 t/s | <1 t/s |

Numbers are approximate — actual speed varies by prompt length and context window usage. The 26B MoE model activates only 3.8B parameters during inference, making it surprisingly fast for its size.
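You can measure throughput on your own hardware using the timing fields Ollama returns with each non-streamed response (`eval_count` is generated tokens; `eval_duration` is generation time in nanoseconds). A sketch, assuming the ollama Python package and a running local server:

```python
def throughput(eval_count, eval_duration_ns):
    """Tokens per second from Ollama's timing fields (duration in nanoseconds)."""
    return eval_count / (eval_duration_ns / 1e9)

def measure(model="gemma4:27b", prompt="Explain recursion briefly."):
    """Ask the local server for one completion and report generation speed."""
    import ollama  # pip install ollama; requires a running Ollama server
    resp = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return throughput(resp["eval_count"], resp["eval_duration"])
```

Run `measure()` a few times with a representative prompt; the first call includes model load time, so discard it.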

Prompting Tips

Gemma 4 responds well to direct, specific prompts:

For coding:

You are an expert Python developer. Write a clean, documented function that [task]. Include type hints and a docstring.

For explanation:

Explain [topic] as if I'm a senior developer who knows [related tech] but is new to this specific concept. Be concise.

For review:

Review this code for bugs, performance issues, and style violations. Rate each on severity (low/medium/high).

Troubleshooting

"Error: model not found"

Run ollama pull with the exact tag you tried to run (e.g., ollama pull gemma4:27b) to download the weights first.

Out of memory errors

Switch to a smaller model or quantized variant. Use Activity Monitor (macOS) or nvidia-smi (Linux) to check memory usage.

Slow performance

  • Verify GPU acceleration: ollama ps should show the model
  • Try a smaller model variant
  • Close other memory-heavy applications
  • Check thermal throttling on laptops

API connection refused

Ensure Ollama server is running:

ollama serve  # starts the server

Summary

  • Gemma 4 runs fully offline via Ollama — no API keys, no data leaks
  • Four sizes: E2B and E4B for edge/mobile (128K context), 26B MoE and 31B Dense for workstations (256K context)
  • 26B MoE activates only 3.8B parameters for fast inference; 31B Dense for maximum quality
  • Quantized variants (q4_K_M) save VRAM with minimal quality loss
  • Apple Silicon gets automatic GPU acceleration; NVIDIA needs sufficient VRAM
  • Multimodal: Vision + audio understanding on all models
  • Multilingual: Native support for 140+ languages
  • Apache 2.0 license — fully permissive for commercial use
  • OpenAI-compatible API works with existing tools and libraries

Frequently Asked Questions

What’s the difference between Gemma 3 and Gemma 4?

Gemma 4 improves reasoning, coding performance, and instruction following. The 31B Dense model ranks #3 on the Arena AI open-source leaderboard, outperforming models 20x its size. Key upgrades include:

  • Multimodal support (vision + audio) on all models
  • 140+ languages natively
  • 128K context (E2B/E4B) or 256K context (26B/31B)
  • Apache 2.0 license (was Google’s restrictive custom license)
  • Native function calling and agentic workflow support

Can I run Gemma 4 without internet after the initial download?

Yes. Once you ollama pull the model, it runs entirely offline. The weights are stored in ~/.ollama/models/. No cloud connection required for inference. This is ideal for air-gapped environments, privacy-sensitive work, or deployments without reliable internet.
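To confirm what's cached locally, you can total up the store directory mentioned above. A minimal sketch (path as stated in the text):

```python
from pathlib import Path

def model_store_size(root=None):
    """Total bytes under Ollama's local model store (0 if the directory is absent)."""
    root = root or Path.home() / ".ollama" / "models"
    if not root.exists():
        return 0
    return sum(f.stat().st_size for f in root.rglob("*") if f.is_file())

print(f"{model_store_size() / 1e9:.1f} GB of cached model weights")
```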

Which Gemma 4 size should I choose?

  • E2B (2B effective): Raspberry Pi, IoT devices, phones, real-time edge processing with vision/audio
  • E4B (4B effective): Jetson Nano, Android devices, better quality than E2B while still edge-friendly
  • 26B MoE (Mixture of Experts): Desktop workstations, fast inference (activates only 3.8B params), coding assistants
  • 31B Dense: High-end GPUs, maximum quality, fine-tuning, complex reasoning tasks

How does the 26B MoE model work?

MoE (Mixture of Experts) means the model has 26 billion total parameters but only activates 3.8 billion during each inference pass. It routes each token to the most relevant “expert” sub-networks. This gives you fast tokens-per-second comparable to a 4B model, with the quality of a much larger model.
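The routing idea can be illustrated with a toy top-k router in pure Python. This is purely illustrative, not Gemma's actual architecture: a gate scores every expert for each token, and only the best k experts actually run.

```python
def route_topk(scores, k=2):
    """Return indices of the k highest-scoring experts for one token."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def moe_forward(token, experts, gate, k=2):
    """Run only the top-k experts for this token and average their outputs.

    `experts` is a list of callables; `gate(token, i)` scores expert i.
    With 8 experts and k=2, only a quarter of the network runs per token.
    """
    scores = [gate(token, i) for i in range(len(experts))]
    chosen = route_topk(scores, k)
    return sum(experts[i](token) for i in chosen) / k

# Toy demo: experts multiply by different constants; gate prefers low indices
experts = [lambda x, m=m: x * m for m in (1, 2, 3, 4)]
gate = lambda tok, i: -i  # expert 0 scores highest, then 1, ...
print(moe_forward(10, experts, gate, k=2))  # runs experts 0 and 1 -> 15.0
```

A real MoE learns the gate jointly with the experts, but the cost structure is the same: compute scales with k, capacity scales with the total expert count.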

Can I use Gemma 4 for commercial projects?

Yes. Gemma 4 uses the Apache 2.0 license — the same permissive license used by Android, Kubernetes, and TensorFlow. You can use it commercially, modify it, distribute it, and even build proprietary products on top of it. No usage restrictions, no attribution requirements beyond the license text.

Using Google Dorks with Claude:

Claude-Generated Google Dorks: A Complete Operational Guide

Quick Answer: Yes, Claude Generates Excellent Dorks

Claude can generate targeted, effective Google dorks for virtually any reconnaissance objective. Claude understands:

  • Search operator syntax across Google, Bing, DuckDuckGo, Scholar, and specialized engines
  • Operator combinations that maximize precision
  • Domain targeting for institutional, government, and corporate reconnaissance
  • File type strategies for locating specific document formats
  • Evasion techniques for bypassing common protections
  • Multi-stage dork chains for progressive refinement
  • Ethical boundaries and legal use constraints

This guide walks you through requesting effective dorks, optimizing results, and building dork libraries for specific investigation types.


How to Request Google Dorks from Claude

Best Request Format (Template)

I need Google dorks to find [SPECIFIC INFORMATION/TARGETS].

Investigation Type: [e.g., institutional research, vulnerability assessment, 
                     competitive intelligence, academic collaboration mapping]

Target Scope: [e.g., specific company, government agencies, academic institutions, 
               geographic region, technology sector]

Information I'm Looking For:
- [Specific item 1]
- [Specific item 2]
- [Specific item 3]

Target Domains (if applicable):
- domain1.com
- domain2.edu
- *.gov

Constraints:
- Legal/ethical use only: [Yes/No - describe if special]
- Authorized reconnaissance: [Yes/No - explain context]
- Post-date: [if looking for recent information]
- File types: [PDF, DOCX, XLS, etc., or "all"]

Level of Specificity:
- Broad dorks (cast wide net)
- Narrow dorks (high precision, fewer results)
- Progressive dorks (start broad, refine down)

Please provide:
[ ] Basic dorks (5-10 queries to start)
[ ] Advanced dorks (operator combinations)
[ ] Multi-stage dorks (progressive refinement chain)
[ ] Bypass dorks (for heavily protected domains)

Types of Dorks Claude Can Generate

1. Basic Information Gathering Dorks

Request Example:

Generate dorks to find public information about Company X's research direction.

Target: tech-company.com
Looking for:
- Research papers or whitepapers
- Technology roadmaps
- Funding announcements
- Hiring/career pages indicating specialization

Prefer: High precision (fewer false positives)

Claude Output Example:

Basic Research Direction Dorks:

1. site:tech-company.com filetype:pdf "research"
2. site:tech-company.com intitle:"whitepaper" OR intitle:"technical report"
3. site:tech-company.com/research author:"researcher name"
4. site:tech-company.com intext:"2025" intext:"breakthrough" filetype:pdf
5. site:tech-company.com/blog "research" OR "technology" "2026"

2. Institutional Capability Assessment Dorks

Request Example:

Generate dorks for assessing a university's quantum computing research capability.

Target: stanford.edu
Information needed:
- Published quantum research papers
- Researcher affiliations in quantum field
- Funding/grant information
- Laboratory/facility references

Use: Academic analysis only

Claude Output Example:

University Quantum Research Assessment:

1. site:stanford.edu author:"quantum" filetype:pdf
2. site:stanford.edu intitle:"quantum computing" OR "quantum error correction"
3. site:stanford.edu intext:"NSF grant" OR "DARPA" intext:"quantum"
4. site:scholar.google.com author:"Stanford" intitle:"quantum" 2025
5. site:stanford.edu/research/labs intext:"quantum"

3. Vulnerability/Misconfiguration Discovery Dorks

Request Example:

Generate dorks to identify common web misconfigurations on government websites.

Target: *.gov domains (general search)
Looking for:
- Exposed directories
- Debug information
- Default credentials
- Backup files
- Configuration files

Use: Authorized penetration testing, responsible disclosure

Claude Output Example:

Government Website Misconfiguration Assessment:

1. site:*.gov intitle:"index of" filetype:pdf
2. site:*.gov inurl:admin OR inurl:backup OR inurl:test
3. site:*.gov filetype:sql OR filetype:bak OR filetype:conf
4. site:*.gov intitle:"error" intext:"database" OR intext:"password"
5. site:*.gov inurl:/api/ intext:"401" OR intext:"403"

⚠️ LEGAL NOTE: Only use on systems you own or have written authorization to test.

4. Competitor Intelligence Dorks

Request Example:

Generate dorks for competitive intelligence on a financial services firm.

Target: competitor-bank.com
Looking for:
- Strategic partnerships
- Technology stack indicators
- Market positioning
- Organizational structure (via author names)
- Acquisition targets or expansions

Use: Competitive market research, publicly available information only

Claude Output Example:

Financial Services Competitor Intelligence:

1. site:competitor-bank.com filetype:pdf "partnership" OR "collaboration"
2. site:competitor-bank.com intitle:"white paper" OR "market analysis"
3. site:competitor-bank.com intext:"acquisition" OR "merger" 2025-2026
4. site:competitor-bank.com author:"Chief Technology Officer" OR "CTO"
5. site:competitor-bank.com press OR news "expansion" OR "new market"

5. Academic Collaboration Mapping Dorks

Request Example:

Generate dorks to map research collaboration networks in machine learning.

Information Needed:
- Co-author relationships across institutions
- Multi-institutional research projects
- Cross-sector (academia-industry) collaborations
- Funding source identification

Use: Academic research analysis

Claude Output Example:

Machine Learning Collaboration Network Mapping:

1. author:"researcher name" source:"Nature" OR source:"ICML" 2025
2. site:scholar.google.com intitle:"machine learning" author:"institution1" co-author:"institution2"
3. site:arxiv.org intitle:"federated learning" intext:"2025" intext:"collaboration"
4. site:researchgate.net author:"researcher" "machine learning"
5. Multiple institutional affiliations: [institution1] [institution2] "joint research"

6. Multi-Stage Progressive Refinement Dorks

Request Example:

Generate a progressive dork chain to investigate a specific security vulnerability.

Starting broad, then progressively narrowing focus.

Topic: Vulnerable API endpoints
Goal: Find evidence of this vulnerability in the wild

Stages: Start broad → identify affected platforms → narrow to specific configurations

Claude Output Example:

STAGE 1 - Broad Discovery (Cast Wide Net)
1. inurl:"/api/" intext:"authentication failed"
2. filetype:pdf "API vulnerability" 2025

STAGE 2 - Platform-Specific (Narrow Down)
3. site:github.com inurl:"/api/auth" intext:"bypass"
4. site:stackoverflow.com intitle:"API" intitle:"vulnerability"

STAGE 3 - High-Precision (Deep Dive)
5. intitle:"CVE-2025-XXXXX" filetype:pdf
6. intext:"proof of concept" intext:"API authentication" filetype:py OR filetype:js

STAGE 4 - Evidence Collection (Verification)
7. site:reddit.com OR site:twitter.com "CVE-2025-XXXXX" "working"

Advanced Dork Generation Techniques

Technique 1: Operator Chaining for Precision

Request to Claude:

Generate tightly chained dorks combining 4-6 operators for maximum precision.

Target: Find confidential internal documents accidentally exposed on a company website
Company: example-corp.com
Document indicators: "confidential", "internal", "not for distribution"
File types: PDF, DOCX, XLS
Time frame: Documents from 2024-2026

Generate: 3 different operator chains, increasing specificity

Claude Output:

Tight Operator Chains for Confidential Document Discovery:

Chain 1 (Broad):
site:example-corp.com filetype:pdf "confidential"

Chain 2 (Medium Precision):
site:example-corp.com filetype:(pdf OR docx OR xls) "confidential" 
OR "internal" intext:"not for distribution"

Chain 3 (High Precision):
site:example-corp.com filetype:pdf intitle:"confidential" intext:"2025" 
-inurl:public -inurl:approved
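Chains like these can also be assembled programmatically. The sketch below is a hypothetical helper that only builds query strings (it runs no searches); the filetype: operators are OR-joined individually, which is the more portable form of the grouped syntax:

```python
# Hypothetical helper that assembles dork strings of increasing
# specificity, like the chains above. String assembly only; no
# searches are executed.

def build_chain(site, terms, filetypes=None, exclude_paths=None):
    parts = [f"site:{site}"]
    if filetypes:
        joined = " OR ".join(f"filetype:{ft}" for ft in filetypes)
        parts.append(f"({joined})" if len(filetypes) > 1 else joined)
    parts.extend(f'"{term}"' for term in terms)
    if exclude_paths:
        parts.extend(f"-inurl:{path}" for path in exclude_paths)
    return " ".join(parts)

broad = build_chain("example-corp.com", ["confidential"], ["pdf"])
narrow = build_chain("example-corp.com", ["confidential"],
                     ["pdf", "docx"],
                     exclude_paths=["public", "approved"])
```

Adding terms, file types, and exclusions one at a time makes it easy to see which operator tightened (or broke) the result set.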

Technique 2: Negation Operators (Exclusion Refinement)

Request to Claude:

Generate dorks that intentionally EXCLUDE common false positives.

Topic: Finding real proofs-of-concept for a vulnerability
But exclude: marketing material, theoretical papers, vendor websites

Query: vulnerability "proof of concept"
Exclude: marketing, vendor, theoretical, press release

Generate: 3 dorks using negation to filter noise

Claude Output:

Negation-Refined Dorks:

1. "proof of concept" vulnerability -site:vendor.com -"marketing" -"white paper"

2. filetype:pdf "PoC" OR "proof of concept" -"theoretical" -"proposed" 
   -site:*.marketing.com

3. (github.com OR pastebin.com) "exploit" "working" -"educational" 
   -"demonstration only"

Technique 3: Synonym & Variation Generation

Request to Claude:

Generate dorks using multiple synonyms and terminological variations.

Topic: Find information about AI model training datasets
Variations needed: "training data", "dataset", "corpus", "training set"
Different terminology: "model training", "fine-tuning", "training pipeline"

Generate: Multiple dorks using different terminology to catch all variations

Claude Output:

Synonym Variation Dorks:

1. ("training data" OR "dataset" OR "training set" OR "corpus") 
   filetype:pdf AI 2025

2. ("model training" OR "fine-tuning" OR "training pipeline") 
   intext:"dataset" 2025

3. site:github.com ("training-data" OR "train-dataset" OR "corpus") 
   AI model

4. intext:"training" intext:"dataset" intext:"model" filetype:csv OR filetype:json
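Synonym expansion is mechanical enough to script. A minimal sketch that quotes and OR-joins each synonym group, producing queries like dork 1 above (string assembly only; the terms are the examples from this section):

```python
# Expand synonym groups into OR-joined query clauses. Each group is a
# list of interchangeable terms; multi-term groups get parentheses.

def synonym_query(groups, suffix=""):
    clauses = []
    for group in groups:
        joined = " OR ".join(f'"{term}"' for term in group)
        clauses.append(f"({joined})" if len(group) > 1 else joined)
    return f"{' '.join(clauses)} {suffix}".strip()

q = synonym_query(
    [["training data", "dataset", "training set", "corpus"]],
    suffix="filetype:pdf AI 2025",
)
```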

Technique 4: Cross-Engine Dorks

Request to Claude:

Generate dorks for multiple search engines simultaneously.

Topic: Find exposed database backups
Engines needed: Google, Bing, DuckDuckGo, plus specialized engines

Include: Syntax variations for each engine's operator support

Claude Output:

Cross-Engine Database Backup Discovery:

GOOGLE:
site:* filetype:(sql OR bak OR backup) "database" intext:"2025"

BING:
site:* filetype:(sql OR bak) "database backup" contains:credentials

DUCKDUCKGO:
("database backup" OR "db_backup") filetype:(sql OR bak) 
[Note: DDG has limited operator support, use keywords heavily]

GITHUB (specialized):
path:backup filename:*.sql OR filename:*.bak intext:"database"

SHODAN (specialized):
"MySQL" OR "PostgreSQL" port:3306 country:US
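One way to manage per-engine differences is to keep a single logical goal and store an explicitly written template per engine. A small sketch; the templates are illustrative and should be checked against each engine's current operator documentation:

```python
# One logical query, rendered per engine, since operator support differs.
# Templates are illustrative examples, not verified syntax.

ENGINE_TEMPLATES = {
    "google": 'filetype:sql OR filetype:bak "database backup"',
    "duckduckgo": '"database backup" OR "db_backup"',  # weak operator support
    "github": 'path:backup "database"',  # code-search style
}

def queries_for(engines):
    """Return only the templates we actually have for the requested engines."""
    return {e: ENGINE_TEMPLATES[e] for e in engines if e in ENGINE_TEMPLATES}

subset = queries_for(["google", "duckduckgo"])
```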

Step-by-Step: Requesting Dorks from Claude

Complete Workflow Example

Step 1: Define Your Objective Clearly

Your Input to Claude:

Objective: Map the research-to-industry pipeline for quantum computing startups.

Specifically find:
- University quantum labs publishing research
- Researchers with dual affiliations (academia + startup)
- Startup hiring announcements targeting quantum researchers
- Published research by startup researchers

Target domains:
- Top 10 quantum startups (list provided)
- Top 50 universities with quantum programs (list provided)

Use case: Market analysis and trend identification (public information only)

Preference: Progressive dorks (start broad, then narrow)

Step 2: Claude Generates Initial Set

Claude's Output:

STAGE 1: University Quantum Research Discovery
1. site:*.edu intitle:"quantum" filetype:pdf 2025-2026
2. site:berkeley.edu OR site:stanford.edu OR site:mit.edu author:"quantum"
3. site:scholar.google.com "quantum computing" university affiliation 2025

STAGE 2: Researcher Dual Affiliation Detection
4. author:"researcher name" site:*.edu AND site:*.com filetype:pdf
5. intitle:"quantum" author:"Dr. Name" (university OR startup affiliation)

STAGE 3: Startup Talent Acquisition Signals
6. site:startup-name.com "hiring" "quantum" OR "researcher" filetype:pdf
7. site:startup-name.com/careers OR site:startup-name.com/research author:"University"

STAGE 4: Startup-Published Research
8. author:"Startup Company Name" source:"Nature" OR source:"arXiv"
9. site:arxiv.org intext:"Startup Name" intitle:"quantum"

Step 3: Refine Based on Results

Your Follow-up:

Good start. The Stage 1 results are too broad (getting unrelated 
quantum papers). 

Can you:
1. Make Stage 1 more specific - add keywords like "quantum computing" 
   AND ("error correction" OR "gate" OR "qubit")
2. Generate additional dorks for finding investor documents/announcements
3. Add dorks for GitHub/code repositories from quantum startups
4. Include dorks for academic conference speaker affiliations (where 
   dual-affiliated researchers might present)

Step 4: Claude Iteratively Refines

Claude's Refined Output:

REFINED STAGE 1: University Quantum Computing Research (Specific)
1. site:*.edu intitle:"quantum computing" ("error correction" OR "qubit" 
   OR "quantum gate") filetype:pdf 2025-2026

ADDITIONAL: Investor & Funding Signals
10. site:startup-name.com filetype:pdf "funding" OR "Series A" OR "Series B"
11. site:crunchbase.com "quantum" startup filetype:pdf

ADDITIONAL: Research Code & Repositories
12. site:github.com/startup-name quantum OR research
13. site:github.com "quantum" intext:"2025" intext:"startup name"

ADDITIONAL: Conference Speaker Affiliations
14. site:conferencename.org/speakers "quantum" "University" AND "Startup"
15. site:arxiv.org speakers:"University" AND author:"company name"

Step 5: Use, Test, Refine in Practice

Your Report Back:

Results from your refined dorks:

- Stage 1 now working well, found 47 relevant papers
- Investor signals dorks returned some results
- GitHub dorks were too broad, need to narrow to specific repos
- Conference speaker dorks found dual affiliations!

Can you:
1. Generate dorks specifically for GitHub "quantum" projects containing 
   both university and startup names in repo descriptions
2. Add dorks for LinkedIn search (academic-industry collaborations)
3. Create dorks for finding patents filed jointly by university-startup pairs

Best Practices for Claude Dork Requests

✅ DO:

  1. Be Specific About Target
    • ✅ "site:stanford.edu" not ❌ "Stanford University"
    • ✅ "*.gov" not ❌ "government websites"
  2. Clarify Information Objective
    • ✅ "Find papers on machine learning vulnerabilities published after 2025"
    • ❌ "Find ML stuff"
  3. Specify File Types
    • ✅ "filetype:pdf OR filetype:docx OR filetype:pptx"
    • ❌ "documents"
  4. Document Your Use Case
    • ✅ "Authorized security research on my company's domain"
    • ❌ "Just looking for stuff"
  5. Request Multiple Strategies
    • ✅ "Generate 3 different operator combinations with different precision levels"
    • ❌ "Give me dorks"
  6. Iterate and Refine
    • ✅ Test results, report back, ask for adjustments
    • ❌ Accept first results without testing

❌ DON'T:

  1. Ask for illegal dorks (without context)
    • ❌ "Find someone's private information"
    • ✅ "Find publicly available corporate information for competitive analysis"
  2. Request vague targets
    • ❌ "Search everything for anything"
    • ✅ "Search Tesla.com for 2025 product roadmap documents"
  3. Ignore legal/ethical boundaries
    • ❌ "Generate dorks to bypass authentication"
    • ✅ "Generate dorks for vulnerability assessment (authorized)"
  4. Use dorks for unauthorized access
    • Even if Claude generates it, using dorks to access systems you're not authorized to test violates CFAA
  5. Assume first results are optimal
    • Test, validate, refine with Claude's help

Real-World Examples: Full Dork Generation Sessions

Example 1: Academic Research Collaboration Mapping

Your Request:

I'm analyzing collaboration patterns in AI safety research.
I need dorks to find:
1. Co-authored papers on AI safety from 2024-2026
2. Researchers publishing with institutions in different countries
3. Papers acknowledging specific funding sources (OpenAI, Anthropic, NSF)
4. Cross-sector (academia-corporate) research teams

Target: Global institutions (no geographic limit)
Use: Academic research analysis

Generate: Progressive dorks from broad to narrow

Claude Response:

AI SAFETY RESEARCH COLLABORATION MAPPING

STAGE 1 - Broad Discovery:
1. intitle:"AI safety" OR "artificial intelligence safety" filetype:pdf 2024-2026
2. intitle:"alignment" OR "interpretability" source:"Nature" OR source:"ICML"

STAGE 2 - International Collaboration:
3. author:"researcher" site:*.edu author:"international" OR 
   site:*.org (UK OR EU OR Asia) "AI safety"
4. site:scholar.google.com "AI safety" multiple country affiliations

STAGE 3 - Funding Source Identification:
5. intext:"funded by OpenAI" OR "Anthropic" OR "NSF" filetype:pdf "AI safety"
6. intext:"acknowledgments" intext:"OpenAI" OR "MIRI" OR "Center for AI Safety" 
   filetype:pdf

STAGE 4 - Cross-Sector Teams:
7. author:"UC Berkeley" AND author:"OpenAI" OR author:"DeepMind" 
   "AI safety" filetype:pdf
8. site:arxiv.org "AI safety" (site:berkeley.edu OR site:stanford.edu) 
   co-author:"industry"

Example 2: Government Procurement & Policy

Your Request:

Researching federal cybersecurity procurement trends.

Looking for:
1. RFP documents (request for proposal) or solicitations
2. Contract awards in cybersecurity domain
3. Federal agency technology strategy documents
4. Congressional testimony on cybersecurity policy

Target: All federal agencies (*.gov)
Time: 2025-2026 only
Use: Policy analysis, public records

Generate: Specific dorks for each information type

Claude Response:

FEDERAL CYBERSECURITY PROCUREMENT ANALYSIS

RFP & SOLICITATION DISCOVERY:
1. site:*.gov filetype:pdf "request for proposal" OR "RFP" 
   "cybersecurity" 2025-2026
2. site:sam.gov "cybersecurity" OR "security" filetype:pdf
3. site:*.gov inurl:procurement "cybersecurity" 2025-2026

CONTRACT AWARDS:
4. site:usaspending.gov "cybersecurity" "contract award" 2025-2026
5. site:*.gov filetype:pdf "contract" "cybersecurity" 
   intext:"awarded to"

STRATEGY DOCUMENTS:
6. site:*.gov filetype:pdf "cybersecurity strategy" OR "cyber strategy" 2025-2026
7. site:dod.gov OR site:defense.gov "cyber" filetype:pdf "strategy"

CONGRESSIONAL TESTIMONY:
8. site:congress.gov "cybersecurity" "testimony" filetype:pdf 2025-2026
9. site:*.gov/hearings "cyber" OR "cybersecurity" 2025-2026

Example 3: Competitive Technology Intelligence

Your Request:

Competitive intelligence on competitor's AI capabilities.

Target Company: example-ai-corp.com
Looking for:
1. Published research papers (authored by company employees)
2. GitHub repositories indicating technical approach
3. Patent filings
4. Hiring/recruitment (what skills they're seeking)
5. Partnership announcements

Legal context: Publicly available information only; we operate in competitive market

Generate: Organized by information type, with execution order

Claude Response:

COMPETITIVE TECHNOLOGY INTELLIGENCE: EXAMPLE-AI-CORP

EXECUTION ORDER: Start with publication discovery, then expand

1. RESEARCH PUBLICATIONS:
   a. site:scholar.google.com author:"example-ai-corp" 2024-2026
   b. site:arxiv.org author:"example-ai-corp" OR intext:"example-ai-corp"
   c. author:"researcher" site:example-ai-corp.com filetype:pdf "research"

2. GITHUB/TECHNICAL REPOSITORIES:
   d. site:github.com/example-ai-corp
   e. site:github.com intext:"example-ai-corp" "research" OR "AI" OR "model"
   f. site:huggingface.co owner:"example-ai-corp"

3. PATENT FILINGS:
   g. site:patents.google.com "example-ai-corp" OR assignee:"example-ai-corp" 
      2025-2026
   h. site:uspto.gov assignee:"example-ai-corp" "artificial intelligence"

4. HIRING & TALENT SIGNALS:
   i. site:example-ai-corp.com/careers "machine learning" OR "researcher"
   j. site:linkedin.com/jobs "example-ai-corp" "research" OR "PhD"
   k. site:example-ai-corp.com "now hiring" OR "job opening" "AI" OR "ML"

5. PARTNERSHIPS & ANNOUNCEMENTS:
   l. site:example-ai-corp.com/blog OR site:example-ai-corp.com/news 
      "partnership" OR "collaboration"
   m. site:example-ai-corp.com filetype:pdf "partnership" OR "strategy"
   n. intext:"example-ai-corp" "partnership" OR "collaboration" 2025-2026

Building Your Personal Dork Library

Claude can help you build reusable dork libraries organized by investigation type:

Request Template:

Create a dork library for [investigation type].

Investigation Type: [e.g., Vulnerability Assessment, Competitive Intelligence, 
                     Academic Research Mapping]

Categories Needed:
1. [Category 1]
2. [Category 2]
3. [Category 3]

For each category, generate:
- Broad dorks (wide net, many results)
- Medium precision dorks (balanced)
- Narrow precision dorks (few but highly relevant results)

Format: Organized by category, with usage notes

Include: Which search engines work best for each dork

Example Output:

DORK LIBRARY: VULNERABILITY ASSESSMENT

Category 1: Exposed Configuration Files
Broad:   filetype:(conf OR config OR ini OR xml) "password" OR "key"
Medium:  site:*.com filetype:(conf OR config) "database" OR "API"
Narrow:  site:target.com filetype:(conf OR config) intext:"credentials"
Best Engines: Google, Bing

Category 2: Default Credentials & Documentation
Broad:   "default password" OR "default username" filetype:pdf
Medium:  "admin:admin" OR "root:password" intitle:"installation" OR "setup"
Narrow:  site:target.com intitle:"default" intext:"username" intext:"password"
Best Engines: Google

Category 3: Directory Listing & Exposed Directories
Broad:   intitle:"index of" filetype:pdf OR filetype:txt
Medium:  intitle:"index of" site:target.com -inurl:public
Narrow:  intitle:"index of /admin" OR intitle:"index of /backup"
Best Engines: Google, Bing
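A library like this is easier to reuse as data than as prose, so scripts and follow-up prompts can pull the right tier. A minimal sketch using a nested dict; the entries are abridged from the example above, and "target.com" is a placeholder:

```python
# Store the dork library as structured data keyed by category and
# precision tier. Entries abridged from the example library above.

DORK_LIBRARY = {
    "exposed_config_files": {
        "broad": 'filetype:conf OR filetype:ini "password" OR "key"',
        "narrow": 'site:target.com filetype:conf intext:"credentials"',
    },
    "directory_listings": {
        "broad": 'intitle:"index of"',
        "narrow": 'intitle:"index of /admin" OR intitle:"index of /backup"',
    },
}

def get_dorks(category, precision):
    """Look up one tier; returns None for an unknown category or tier."""
    return DORK_LIBRARY.get(category, {}).get(precision)
```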

Advanced: Claude-Assisted Dork Automation

Claude can not only generate dorks but help you automate their execution:

Request:

Generate both:
1. A set of 20 Google dorks for finding exposed credentials
2. A Python script that executes these dorks in sequence, parses results, 
   and exports to CSV

Include: Rate limiting, error handling, CSV output format with columns:
   - URL, dork_used, result_type, date_found, content_snippet

Claude Output:

python
import csv
import time
from datetime import datetime

DORKS = [
    'filetype:pdf "password" "username"',
    'filetype:xlsx "credentials" OR "access"',
    'intitle:"index of" filetype:sql',
    # ... 17 more dorks
]

def execute_search(dork):
    """Stub: plug in a search API client here. Scraping Google result
    pages directly violates its terms of service, so use an official
    search API. Should return (url, result_type, snippet) tuples."""
    return []

results = []
for dork in DORKS:
    for url, result_type, snippet in execute_search(dork):
        results.append({
            "URL": url,
            "dork_used": dork,
            "result_type": result_type,
            "date_found": datetime.now().strftime("%Y-%m-%d"),
            "content_snippet": snippet,
        })
    time.sleep(5)  # rate limit: 5-second delay between searches

# Export to CSV with the structured column format
with open("dork_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[
        "URL", "dork_used", "result_type", "date_found", "content_snippet"])
    writer.writeheader()
    writer.writerows(results)

Troubleshooting Claude Dork Generation

Problem: "Claude generates dorks but results are empty"

Solution:

Tell Claude:
"The dorks you generated for [target] returned 0 results. 
The domain definitely exists and has the content I'm looking for.

Can you:
1. Suggest why these might return empty (domain blocks indexing? operators not supported?)
2. Generate alternative dork variations
3. Suggest alternative search engines that might index this better
4. Propose manual verification strategies"

Problem: "Dorks return too many results (not precise enough)"

Solution:

Tell Claude:
"These dorks returned 10,000+ results, most irrelevant.

Can you add more specific operators:
- Exclude these false positive keywords: [list]
- Narrow to these specific file types: [list]
- Add date ranges: [dates]
- Add more specific keyword combinations"

Problem: "I need dorks for a specialized search engine (not Google)"

Solution:

Tell Claude:
"Generate dorks for [specialized search engine] not Google.

The engine's supported operators are: [list operators]
Here's example syntax they support: [examples]

Generate dorks using only these supported operators for finding: [target]"

Legal & Ethical Considerations

When Requesting Dorks, Always Clarify:

  1. Authorization
    • ✅ "Authorized security research on systems I own"
    • ✅ "Competitive intelligence on publicly available information"
    • ❌ "How to find someone's private information"
  2. Use Case
    • ✅ "Academic research"
    • ✅ "Bug bounty program (authorized)"
    • ✅ "OSINT investigation"
    • ❌ "Bypass security systems"
  3. Target Scope
    • ✅ "*.gov domain (any government site)"
    • ✅ "specific-company.com (only this domain)"
    • ❌ "Find anyone's personal data"

Claude will help you generate powerful dorks but expects ethical use. If your request has any ambiguity about authorization, Claude will ask for clarification.


Quick Reference: Request Formula

[OBJECTIVE] + [TARGET] + [INFORMATION TYPE] + [CONSTRAINTS] + [FORMAT PREFERENCE]

Example:
"Find security misconfigurations (OBJECTIVE) on government websites 
(TARGET) by looking for exposed directories and debug info (INFORMATION TYPE), 
authorized testing only (CONSTRAINTS), generate as progressive dork chain 
from broad to narrow (FORMAT PREFERENCE)."
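The formula is simple enough to encode as a template. A hypothetical prompt builder following the five slots above (the slot names mirror the formula's labels; nothing here is a fixed API):

```python
# The five-slot request formula as a small prompt-building function.

def dork_request(objective, target, info_type, constraints, fmt):
    return (f"Find {objective} on {target} by looking for {info_type}, "
            f"{constraints}, generate as {fmt}.")

prompt = dork_request(
    "security misconfigurations",
    "government websites",
    "exposed directories and debug info",
    "authorized testing only",
    "a progressive dork chain from broad to narrow",
)
```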

Summary: The Power of Claude-Generated Dorks

  • Precision: Claude understands operator nuances and combines them optimally
  • Speed: Generate 20+ dorks in seconds instead of hours of manual research
  • Iteration: Refine based on real results; Claude adapts
  • Learning: Claude explains why each operator does what it does
  • Automation: Claude can generate scripts to execute dorks
  • Cross-Engine: Generate syntax for Google, Bing, Scholar, and specialized engines
  • Strategy: Progressive refinement chains and multi-stage investigations
  • Documentation: Claude provides usage notes and success indicators

Key Takeaway: Claude isn't just a tool for generating dorks—it's a strategic research partner that understands reconnaissance methodology and can generate, refine, test, and optimize dorks for your specific investigation objectives.

Use this to dramatically accelerate your OSINT workflow.

 

A Practical Guide to Providing the Results to Claude

Why Claude Has Limitations Finding Certain Information

Claude has several knowledge access constraints that your dorking research can help overcome:

  1. Knowledge Cutoff (January 2025) – Claude's training data ends in January 2025, so any information published after that date is inaccessible without assistance
  2. No Real-Time Indexing – Claude doesn't continuously crawl the web like search engines do
  3. Limited Web Search Activation – Claude only uses web search tools when explicitly activated or when it recognizes current information is needed
  4. Search Algorithm Differences – Your targeted dorking queries often succeed where Claude's generic searches might return surface-level results
  5. Paywalled and Restricted Content – Academic papers, government databases, and subscription content that you can locate aren't directly accessible to Claude
  6. Search Result Ranking – Claude's web searches may not prioritize the obscure, deeply-indexed content that dorking reveals

Method 1: Direct Pasting of Content (Simple & Immediate)

Best For:

  • Short articles or documents you've found
  • Blog posts, news articles
  • PDFs you've extracted text from
  • Research papers accessible online

How to Do It:

Step 1: Find the content using your dorking query

Example dork: site:*.edu filetype:pdf "quantum error correction" intext:"2025"

Step 2: Copy the content or key passages

  • If it's a PDF: Copy text directly, or use OCR tools such as:
    • Built-in: macOS Preview (Live Text), Windows Snipping Tool text extraction
    • Online: OCR.space, Free Online OCR
    • Command-line: pdftotext file.pdf - (Linux/macOS)

Step 3: Paste into Claude with context

I found this research paper using a targeted Google dork search 
(site:*.edu filetype:pdf "quantum error correction" 2025):

[PASTE FULL TEXT OR KEY SECTIONS HERE]

Based on this source material, can you help me understand...

Advantages:

  • Immediate and direct
  • Claude sees exact content
  • No interpretation lag
  • Works with any text format

Disadvantages:

  • Manual labor (copy/paste)
  • Token usage increases with large documents
  • May hit length limits with very large papers

Method 2: Structured Citation with URL (Most Professional)

Best For:

  • Online-accessible articles and papers
  • Blog posts with stable URLs
  • Government publications
  • Academic repositories

How to Do It:

Step 1: Locate content via dorking

Step 2: Extract clean URL

  • Remove tracking parameters (utm_*, fbclid, etc.)
  • Keep the core domain and path
  • Test the URL in incognito mode to verify public access
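Tracking-parameter cleanup can be automated with the Python standard library. A sketch that drops utm_* and a few other common tracking keys; the key list is a common subset, not an exhaustive one:

```python
# Strip common tracking parameters from a URL before citing it.
# Standard library only; extend TRACKING_KEYS as needed.

from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

TRACKING_PREFIXES = ("utm_",)
TRACKING_KEYS = {"fbclid", "gclid", "mc_cid", "mc_eid"}

def clean_url(url):
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_KEYS
            and not k.startswith(TRACKING_PREFIXES)]
    return urlunparse(parts._replace(query=urlencode(kept)))

cleaned = clean_url("https://example.edu/paper.pdf?utm_source=x&id=42&fbclid=abc")
# cleaned == "https://example.edu/paper.pdf?id=42"
```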

Step 3: Present to Claude with Context

I used a Google dork search to find this document:

Query: site:government.edu filetype:pdf "vulnerability assessment" 2025

Source URL: [URL HERE]
Source Type: Government Report / Academic Paper / News Article
Date Found: April 2026
Publication Date: [if available]

Key passages from the document:
[1-2 key quotes or paragraphs]

Based on this source, can you analyze...

Claude's Web Search Capability:

If Claude has web search enabled, it can fetch and analyze the URL:

Here's a research paper I found through targeted dorking. 
Can you fetch and summarize this URL?

[URL]

Claude will often use web_fetch to retrieve and analyze the content.

Advantages:

  • Verifiable and traceable
  • Claude can independently fetch if search is enabled
  • Academic citation format
  • Minimal token usage for the reference itself

Disadvantages:

  • Requires URL to remain public
  • Content may be paywalled
  • URL may change or disappear

Method 3: Structured Data Export (Best for Analysis)

Best For:

  • Bulk research results
  • Comparative analysis
  • Multi-source investigations
  • Data-heavy findings

How to Do It:

Step 1: Conduct comprehensive dorking search

Example: Multiple queries for competitor analysis
- site:competitor.com filetype:pdf "security"
- site:competitor.com/research author:"Dr. Chen"
- site:competitor.com intitle:"whitepaper" intext:"2025"

Step 2: Organize results in structured format (CSV, JSON, or Markdown Table)

CSV Format:

csv
URL,Title,Publication_Date,Domain,Research_Type,Key_Finding
https://example.com/paper1.pdf,Quantum Advances 2025,2025-03-15,example.edu,Research Paper,"New error correction method"
https://example.com/blog/ai-trends,AI Trends Report,2025-04-01,example.com,Blog Post,"Federated learning gains adoption"

JSON Format:

json
{
  "search_query": "site:*.edu filetype:pdf quantum 2025",
  "results": [
    {
      "url": "https://example.edu/quantum-2025.pdf",
      "title": "Quantum Computing Advances",
      "date": "2025-03-15",
      "source_type": "academic_paper",
      "key_excerpt": "Recent developments in error correction..."
    }
  ]
}

Markdown Table:

markdown
| URL | Title | Date | Type | Key Finding |
|-----|-------|------|------|-------------|
| [Paper](https://...) | Quantum Advances | 2025-03-15 | Research | Error correction breakthrough |
| [Report](https://...) | AI Trends | 2025-04-01 | Analysis | Federated learning adoption |
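All three layouts can be produced from the same result records. A standard-library sketch that emits the CSV header shown above plus a matching JSON document, using a sample row from the earlier example:

```python
# Emit the CSV layout shown above and a matching JSON document from
# the same records. Standard library only; the row is sample data.

import csv
import io
import json

FIELDS = ["URL", "Title", "Publication_Date", "Domain",
          "Research_Type", "Key_Finding"]

rows = [{
    "URL": "https://example.com/paper1.pdf",
    "Title": "Quantum Advances 2025",
    "Publication_Date": "2025-03-15",
    "Domain": "example.edu",
    "Research_Type": "Research Paper",
    "Key_Finding": "New error correction method",
}]

def to_csv(records):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

csv_text = to_csv(rows)
json_text = json.dumps({"results": rows}, indent=2)
```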

Step 3: Provide to Claude with Query Context

I conducted targeted dorking searches on academic institutions and research 
repositories for [topic]. Here are the structured results:

[PASTE TABLE/JSON HERE]

Research methodology:
- Queries used: [list dorks]
- Date range: 2025-01
- Geographic focus: US Government and Universities
- Keywords: quantum, error correction, 2025

Analysis request:
Can you synthesize these findings and identify emerging trends?

Advantages:

  • Organized and analyzable
  • Easy to import into Claude for comparison
  • Clearly documented research methodology
  • Excellent for multi-source synthesis

Disadvantages:

  • More preparation time
  • Requires structured thinking
  • Manual organization required

Method 4: File Upload (Most Comprehensive)

Best For:

  • Long documents (research papers, reports)
  • Multiple files needing cross-reference
  • PDFs with formatting/images you want preserved
  • Detailed institutional documents

How to Do It:

Step 1: Locate and Download Files via Dorking

bash
# Example: Download PDFs matching a dork
# Using wget or curl in terminal:
wget -e robots=off -r -A.pdf 'https://example.edu/research/'

# Or manually download key documents found via dork queries

Step 2: Upload to Claude

Click the attachment/paperclip icon in Claude and upload:

  • PDF files
  • Text files (.txt)
  • Word documents (.docx)
  • Spreadsheets (.xlsx)

Step 3: Reference Uploaded Content

I've uploaded [3] research papers I found using these dork queries:

Query 1: site:*.edu filetype:pdf "machine learning" "2025" 
  → Downloaded: ml_paper_2025.pdf

Query 2: site:github.com intext:"vulnerability" repo:[specific]
  → Downloaded: vulnerability_analysis.pdf

Query 3: site:arxiv.org intitle:"federated learning"
  → Downloaded: federated_learning_preprint.pdf

Questions:
1. What are the key methodological differences across these papers?
2. Can you identify common authors/affiliations?
3. What gaps exist in this research area?

Advantages:

  • Claude can see formatting, images, tables
  • Handles complex documents well
  • No reliance on external URLs
  • Works with paywalled content you've obtained

Disadvantages:

  • File size limits (varies by platform)
  • Requires downloading files first
  • Token usage increases with file size

Method 5: Research Summary with Direct Evidence Links

Best For:

  • Collaborative research projects
  • Comprehensive investigations
  • Peer review and verification
  • Publishing-ready analysis

How to Do It:

Step 1: Conduct comprehensive dorking investigation

Step 2: Create annotated research document

markdown
# Investigation: [Topic]

## Research Methodology
**Dorking Queries Used:**
1. site:competitor.com filetype:pdf "technology stack"
2. author:"Dr. Jane Chen" intitle:"AI architecture"
3. site:arxiv.org intitle:"[specific research]" after:2025-01-01
4. site:github.com inurl:research intext:"confidential" -site:public

## Key Findings

### Finding 1: Corporate Research Direction
**Evidence Source:** [URL from dork query]
**Dork Used:** site:competitor.com filetype:pdf
**Direct Quote:** "[Relevant passage from document]"
**Analysis:** This suggests the organization is investing in...

### Finding 2: Academic Collaboration Network
**Evidence Source:** [Author profiles from Scholar dork]
**Dork Used:** author:"Dr. Chen" source:"Nature"
**Network Map:** 
- Dr. Chen (Institution A) → Co-author Prof. Kumar (Institution B)
- Prof. Kumar → Co-author Dr. Patel (Tech Company C)
**Implication:** This reveals the research-to-industry pipeline...

## Verification Protocol
All findings cross-referenced across [N] independent sources.
URL stability verified: [dates]
Content archived at: [Internet Archive links if available]

Step 3: Provide to Claude for Synthesis

I've completed a detailed investigation into [topic] using 
targeted dorking queries. I'm providing:

1. Complete annotated findings with direct evidence links
2. Methodology documentation
3. Source URLs (all verified and public)
4. Cross-reference map

Can you help me:
- Synthesize these findings into a cohesive analysis?
- Identify any gaps or contradictions?
- Suggest additional verification queries?

Advantages:

  • Highly credible and verifiable
  • Clear methodology transparency
  • Easy for others to replicate
  • Academic/professional standard

Disadvantages:

  • Time-consuming to prepare
  • Requires careful documentation
  • Large submission size

Method 6: Collaborative Research with Web Search

Best For:

  • Real-time collaborative investigation
  • Leveraging Claude's search alongside your dorking
  • Iterative research processes

How to Do It:

Step 1: You conduct targeted dorking

Step 2: Share findings, then request Claude search

I found these results using dorking:
- [Finding 1 with URL]
- [Finding 2 with URL]

I also notice a gap in current information on [specific topic].
Can you conduct a web search to find more recent updates on:

Query suggestion: "[Related topic]" 2026

This would help verify/extend my findings.

Step 3: Claude Conducts Web Search

Claude can use web_search to:

  • Find more recent information
  • Verify your dork results
  • Identify contradicting evidence
  • Discover complementary sources

Advantages:

  • Combines human precision (dorking) with AI breadth (search)
  • Real-time verification
  • Iterative refinement

Disadvantages:

  • Requires multiple turns
  • May consume search quota

Best Practices for Providing Dorking Results

1. Always Document Your Methodology

Dorking Query Used: [exact query]
Search Engine: Google Scholar / Google / Bing
Date Conducted: YYYY-MM-DD
Number of Results: [count]
Keywords Filtered By: [what you searched for]
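The methodology fields above can be captured as a structured log entry so every query is documented the same way. A sketch, with key names mirroring the template (illustrative, not a fixed standard):

```python
# Sketch: record each dork query as a structured log entry so the
# methodology block can be generated automatically.
import datetime
import json

def log_query(query, engine, result_count, keywords):
    return {
        "dorking_query": query,
        "search_engine": engine,
        "date_conducted": datetime.date.today().isoformat(),
        "number_of_results": result_count,
        "keywords_filtered_by": keywords,
    }

entry = log_query('site:arxiv.org intitle:"quantum"',
                  "Google Scholar", 42, ["error correction"])
print(json.dumps(entry, indent=2))
```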

2. Verify Content Authenticity

  • Check URL for SSL/HTTPS
  • Verify domain legitimacy (not typosquatting)
  • Test URL in incognito/private mode
  • Cross-reference information across multiple sources
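The typosquatting check can be partly automated with a similarity heuristic. A sketch, assuming an illustrative known-good domain list and a 0.8 similarity threshold (both are assumptions, not established cutoffs):

```python
# Heuristic sketch: flag a hostname that is suspiciously similar,
# but not identical, to a known legitimate domain.
import difflib
from urllib.parse import urlparse

KNOWN_GOOD = ["arxiv.org", "github.com", "nature.com"]  # illustrative list

def looks_typosquatted(url):
    host = urlparse(url).hostname or ""
    for good in KNOWN_GOOD:
        if host == good or host.endswith("." + good):
            return False  # exact match or legitimate subdomain
        if difflib.SequenceMatcher(None, host, good).ratio() > 0.8:
            return True   # close but not equal: suspicious
    return False

print(looks_typosquatted("https://arxlv.org/abs/1234"))  # → True
print(looks_typosquatted("https://arxiv.org/abs/1234"))  # → False
```

A hit from this check is only a prompt for manual review, not proof of typosquatting.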

3. Preserve Access

  • Use Wayback Machine to archive (archive.org)
  • Screenshots for ephemeral content
  • Note publication/access dates
  • Document if content is paywalled vs. open
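The Wayback Machine accepts save requests at web.archive.org/save/<url> and exposes an availability lookup API at archive.org/wayback/available. A sketch that only builds those URLs; actually fetching them (e.g. with urllib) performs the archive or check:

```python
# Sketch: construct Wayback Machine save and lookup URLs for a source.
from urllib.parse import quote

def wayback_save_url(url):
    """Requesting this URL asks the Wayback Machine to archive the page."""
    return "https://web.archive.org/save/" + url

def wayback_lookup_url(url):
    """Requesting this URL returns JSON describing existing snapshots."""
    return "https://archive.org/wayback/available?url=" + quote(url, safe="")

print(wayback_save_url("https://example.edu/report.pdf"))
print(wayback_lookup_url("https://example.edu/report.pdf"))
```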

4. Provide Context

Context for this finding:
- Why I searched for this: [your reasoning]
- How this connects to the larger question: [connection]
- Why Claude's initial search might have missed this: [gap explanation]
- What makes this source authoritative: [credibility]

5. Highlight the Novel/Recent Information

Claude's knowledge cutoff: January 2025
This source publication date: March 2026
Why Claude wouldn't have found this: Published after training cutoff

Practical Examples

Example 1: Recent Academic Research

Your Dorking Search:

site:arxiv.org intitle:"quantum" intitle:"2026" intext:"error correction"

Found Paper: https://arxiv.org/abs/2604.xxxxx

Providing to Claude:

I found a recent preprint using academic dorking that's published 
AFTER your knowledge cutoff:

Source: arXiv preprint repository
Dork: site:arxiv.org intitle:"quantum" intitle:"2026"
URL: [arxiv URL]
Publication: April 2026

Can you fetch and analyze this for me?

Example 2: Competitor Intelligence

Your Dorking Search:

site:competitor.com filetype:pdf "research" "2025-2026"
author:"Chief Technology Officer"

Providing to Claude:

I conducted institutional reconnaissance using dorking on a technology 
company's domain. I found these research documents they published:

1. [whitepaper URL] - Title, Date
   Key passage: "[quote]"
   
2. [research blog] - Title, Date
   Key passage: "[quote]"

Analysis request: Based on these publications, what can we infer 
about their R&D direction and capability development?

Example 3: Government/Institutional Policy

Your Dorking Search:

site:*.gov filetype:pdf "cybersecurity policy" "2026"
intext:"critical infrastructure"

Providing to Claude:

I retrieved government cybersecurity policy documents published in 2026 
(after your training cutoff) using targeted site searches:

Document 1: [URL] - Published [date]
Relevant section: "[paste key policy language]"

Document 2: [URL] - Published [date]
Relevant section: "[paste key policy language]"

Request: How do these new policies compare to the 2024-2025 
frameworks I'm familiar with?

Troubleshooting: When Claude Still Can't Access Content

Problem: URL returns "Access Denied" or "403"

Solutions:

  • Check for an archived copy on the Wayback Machine (archive.org)
  • Copy the visible text yourself and paste it directly
  • Screenshot the page and upload it as an image
  • Confirm the URL loads in an incognito/private window before sharing it

Problem: Paywalled Content

Solutions:

  • Screenshot the content and upload as image
  • Copy accessible sections (often abstracts, introductions)
  • Use institutional proxies if you have access (but don't share credentials)
  • Check preprint servers (arXiv, bioRxiv) for free versions

Problem: Large Document (Long PDF)

Solutions:

  • Extract only the relevant sections
  • Create a summary with key quotes
  • Split into multiple uploads
  • Use OCR to convert to text if upload fails
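The "split into multiple uploads" step can be sketched as a paragraph-aware chunker; the character budget here is an illustrative assumption, not a platform limit:

```python
# Sketch: break extracted text into chunks under a character budget,
# splitting on paragraph boundaries so no paragraph is cut mid-way.
def split_for_upload(text, max_chars=8000):
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks

parts = split_for_upload("A" * 5000 + "\n\n" + "B" * 5000, max_chars=6000)
print(len(parts))  # → 2, neither chunk over the budget
```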

Problem: "Content Too Large"

Solutions:

  • Compress PDF
  • Extract text and remove images
  • Provide summary with quotes instead of full document
  • Break into multiple parts

Advanced: Creating a "Dorking Research Package"

For significant investigations, create a comprehensive package:

research_package/
├── methodology.md (all dorks used, date, search engines)
├── findings_summary.md (executive summary)
├── detailed_findings/ (each finding with evidence)
│   ├── finding_1.md
│   ├── finding_2.md
│   └── ...
├── source_archive/ (downloaded PDFs, screenshots)
│   ├── source_1.pdf
│   ├── source_2.pdf
│   └── ...
├── verification.md (how findings were cross-checked)
└── claude_analysis_request.md (what you want Claude to do)
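The skeleton above can be created in one step. A sketch using Python's standard library (`make_package` is a hypothetical helper):

```python
# Sketch: create the research-package directory tree shown above.
from pathlib import Path

def make_package(root="research_package"):
    base = Path(root)
    for sub in ("detailed_findings", "source_archive"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    for name in ("methodology.md", "findings_summary.md",
                 "verification.md", "claude_analysis_request.md"):
        (base / name).touch()
    return base

# make_package()  # creates the tree in the current directory
```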

Then share with Claude:

  • Upload multiple files
  • Reference the methodology document
  • Ask Claude to synthesize across all uploads

Why This Matters for Claude's Usefulness

By providing dorking results, you:

  1. Extend Claude's Knowledge – Give it access to post-cutoff information
  2. Improve Accuracy – Provide verified sources vs. generic search results
  3. Enable Specialized Research – Leverage deep web/academic database content
  4. Increase Relevance – Target exactly what's needed, not surface-level results
  5. Create Verifiable Citations – Documents are traceable and checkable
  6. Accelerate Analysis – Claude can focus on synthesis rather than searching

This transforms Claude from a knowledge-retrieval tool into a powerful research synthesis and analysis engine for information you've expertly located.


Summary Quick Reference

| Method              | Best For                     | Setup Time | Token Cost | Verification |
|---------------------|------------------------------|------------|------------|--------------|
| Direct Paste        | Short documents, extracts    | 5 min      | High       | Moderate     |
| Structured Citation | Online accessible content    | 10 min     | Low        | High         |
| Data Export         | Bulk/comparative analysis    | 20 min     | Medium     | Very High    |
| File Upload         | Long/complex documents       | 15 min     | High       | High         |
| Research Summary    | Comprehensive investigations | 45+ min    | Very High  | Excellent    |
| Web Search Collab   | Real-time verification       | 10 min     | Medium     | High         |

Key Takeaway

You have powerful dorking skills that find information Claude can't. By strategically sharing these results using the methods above, you transform Claude from a general knowledge tool into a specialized research analysis partner that can synthesize and act on information you've expertly located through targeted reconnaissance techniques.

The combination of your dorking precision + Claude's synthesis capability = far more powerful than either alone.