<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Uhiyama Lab]]></title><description><![CDATA[A creator's development journal]]></description><link>https://uhiyama-lab.com</link><generator>GatsbyJS</generator><lastBuildDate>Sat, 04 Apr 2026 09:15:33 GMT</lastBuildDate><atom:link href="https://uhiyama-lab.com/feed/blog-en.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><item><title><![CDATA[Automated Multilingual YouTube Dubbing with ElevenLabs API, Python & ffmpeg]]></title><description><![CDATA[A step-by-step guide to automatically generating multilingual dubbed audio for YouTube videos using ElevenLabs API, Python, ffmpeg, and Claude. Covers transcription, translation, TTS generation, timeline construction, and mastering.]]></description><link>https://uhiyama-lab.com/en/blog/video-edit/elevenlabs-youtube-dubbing-workflow/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/video-edit/elevenlabs-youtube-dubbing-workflow/</guid><category><![CDATA[audio]]></category><pubDate>Sun, 29 Mar 2026 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="/c02a03441f57ded3fb6187336bca472d/20260329_ElevenLabs_thumb.png" alt=""></p>
<p>Want to reach international audiences with your YouTube videos? Adding <strong>dubbed audio</strong> — not just subtitles — dramatically improves the viewing experience. Previously, this required hiring professional narrators or speaking foreign languages yourself, but ElevenLabs' TTS API has changed the game.</p>
<p>This article walks through my <strong>complete workflow for automatically generating multilingual dubbed audio for YouTube videos</strong>. The tools are Claude, Python, ffmpeg, and ElevenLabs API. Claude handles the core text processing, while Python + ffmpeg automate audio generation and processing. No special equipment needed — individual creators can get started right away.</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#workflow-overview">Workflow Overview</a></li>
<li><a href="#prerequisites">Prerequisites</a>
<ul>
<li><a href="#required-tools">Required Tools &#x26; Accounts</a></li>
<li><a href="#api-key-setup">Getting Your ElevenLabs API Key</a></li>
</ul>
</li>
<li><a href="#claude-role">Why Claude (LLM) Is Essential</a>
<ul>
<li><a href="#python-limitation">Why Python Alone Isn't Enough</a></li>
<li><a href="#why-claude">Why I Recommend Claude</a></li>
</ul>
</li>
<li><a href="#step1-whisper">Step 1: Create Subtitles (Text)</a>
<ul>
<li><a href="#transcription">Transcription Methods</a></li>
<li><a href="#preprocessing">Preprocessing: Python + Claude Two-Stage Approach</a></li>
</ul>
</li>
<li><a href="#step2-translation">Step 2: Create Translated Subtitles with Claude &#x26; Optimize for TTS</a>
<ul>
<li><a href="#translation-tips">Translation Tips</a></li>
<li><a href="#tts-optimization">TTS Optimization Rules</a></li>
</ul>
</li>
<li><a href="#step3-tts">Step 3: Generate Audio with ElevenLabs API</a>
<ul>
<li><a href="#api-basics">API Basics</a></li>
<li><a href="#python-script">Python Script Implementation</a></li>
<li><a href="#cache-and-retry">Caching &#x26; Retry Strategy</a></li>
</ul>
</li>
<li><a href="#step4-timeline">Step 4: Build Timeline with ffmpeg</a>
<ul>
<li><a href="#speed-adjustment">Speed Adjustment (atempo)</a></li>
<li><a href="#timeline-placement">Silence Base + adelay for Absolute Timestamp Placement</a></li>
<li><a href="#mixdown">Mixdown with amix</a></li>
</ul>
</li>
<li><a href="#step5-mastering">Step 5: Mastering &#x26; Quality Verification</a>
<ul>
<li><a href="#gain-limiter">Gain Adjustment &#x26; Peak Limiter</a></li>
<li><a href="#quality-check">Quality Check via Report</a></li>
</ul>
</li>
<li><a href="#step6-premiere">Step 6: Combine with BGM &#x26; SFX in Premiere Pro</a></li>
<li><a href="#step7-upload">Step 7: Upload to YouTube Studio</a></li>
<li><a href="#cost-estimation">Cost &#x26; Usage Estimates</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Workflow Overview {#workflow-overview}</h2>
<p>This workflow consists of 7 steps.</p>
<pre><code>Video file
  ↓ Step 1: Transcription + Python preprocessing + Claude refinement
Japanese text
  ↓ Step 2: Claude translation + TTS optimization
Dubbing subtitles (_en_dub.srt)
  ↓ Step 3: ElevenLabs TTS API
Per-cue audio files (.mp3)
  ↓ Step 4: ffmpeg timeline construction (★absolute timestamp placement, not simple concatenation)
Timeline audio (.wav)
  ↓ Step 5: Mastering
Dubbed audio (.mp3)
  ↓ Step 6: Premiere Pro — mute original audio + combine with BGM/SFX
Final dubbed audio (.mp3)
  ↓ Step 7: Upload
YouTube Studio (dubbed audio track)
</code></pre>
<p>Here are the tools used at each step.</p>
<table>
<thead>
<tr>
<th>Step</th>
<th>Tools</th>
<th>Role</th>
</tr>
</thead>
<tbody>
<tr>
<td>Step 1</td>
<td>Premiere Pro / Whisper + Python + <strong>Claude</strong></td>
<td>Transcription, preprocessing, text refinement</td>
</tr>
<tr>
<td>Step 2</td>
<td><strong>Claude</strong></td>
<td>Translation, dubbing subtitle creation, TTS optimization</td>
</tr>
<tr>
<td>Step 3</td>
<td>Python + ElevenLabs API</td>
<td>TTS audio generation</td>
</tr>
<tr>
<td>Step 4</td>
<td>ffmpeg</td>
<td>Speed adjustment, timeline placement</td>
</tr>
<tr>
<td>Step 5</td>
<td>ffmpeg</td>
<td>Gain adjustment, limiter, MP3 output</td>
</tr>
<tr>
<td>Step 6</td>
<td>Premiere Pro</td>
<td>Mute original audio, BGM/SFX mixing</td>
</tr>
<tr>
<td>Step 7</td>
<td>YouTube Studio</td>
<td>Dubbed audio registration</td>
</tr>
</tbody>
</table>
<p>The <strong>text processing phase</strong> in Steps 1-2 requires the most intellectual effort, and Claude (LLM) is essential here. Steps 3-5 are automated processing with scripts and ffmpeg.</p>
<h2>Prerequisites {#prerequisites}</h2>
<h3>Required Tools &#x26; Accounts {#required-tools}</h3>
<p>Prepare the following tools and accounts.</p>
<ul>
<li><strong>Claude</strong> (Anthropic) — core tool for translation and text optimization</li>
<li><strong>Python 3.10+</strong></li>
<li><strong>ffmpeg / ffprobe</strong> (available in PATH)</li>
<li><strong>ElevenLabs account</strong> (Starter Plan or above)</li>
<li><strong>Premiere Pro</strong> (for transcription; Whisper is an alternative)</li>
<li><strong>Python libraries</strong>: <code>requests</code>, <code>python-dotenv</code> (plus <code>faster-whisper</code> if using Whisper)</li>
</ul>
<pre><code class="language-bash">pip install requests python-dotenv faster-whisper
</code></pre>
<h3>Getting Your ElevenLabs API Key {#api-key-setup}</h3>
<ol>
<li>Create an account at <a href="https://elevenlabs.io/">ElevenLabs</a></li>
<li>Copy your API key from the <strong>Profile + API Key</strong> section in the dashboard</li>
<li>Go to "Voices" → "Explore" in the sidebar, filter by language and category (Narration, Conversational, etc.) to find a voice you like</li>
<li>Copy the Voice ID from the voice detail page</li>
</ol>
<p><img src="/86d0d28e43d455bc74aadd8d1abbb35a/20260329_ElevenLabs_VoiceID.png" alt="ElevenLabs voice exploration page. Filter by language and category to find the right voice for dubbing."></p>
<p>Create a <code>.env</code> file in your project root with your API key and Voice IDs.</p>
<pre><code class="language-bash">ELEVENLABS_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxx
ELEVENLABS_VOICE_ID_EN=UgBBYS2sOqTuMpoF3BR0
ELEVENLABS_VOICE_ID_KR=PDoCXqBQFGsvfO0hNkEs
</code></pre>
<p>The Voice IDs above are the voices I actually use. I selected them after auditioning voices from ElevenLabs' voice library.</p>
<p><img src="/0c54ac22683370025c03f2f59956ee91/20260329_ElevenLabs_favoriteVoice.png" alt="The English voice (Mark - Natural Conversations) and Korean voice (Chris - Warm and Clear) I use."></p>
<p>Voice IDs aren't secret — they're IDs of publicly available voices in ElevenLabs' library, so feel free to use them directly. Of course, finding voices that match your own channel's tone is recommended.</p>
<p>You can also check available voices via the ElevenLabs API <code>/voices</code> endpoint.</p>
<pre><code class="language-python">import requests, os
from dotenv import load_dotenv

load_dotenv()
api_key = os.getenv("ELEVENLABS_API_KEY")

resp = requests.get(
    "https://api.elevenlabs.io/v1/voices",
    headers={"xi-api-key": api_key},
)
for v in resp.json()["voices"]:
    print(f"{v['voice_id']}  {v['name']}  ({v['category']})")
</code></pre>
<h2>Why Claude (LLM) Is Essential {#claude-role}</h2>
<p>This workflow relies not only on Python + ffmpeg automation, but on <strong>Claude (LLM) playing a central role</strong>.</p>
<h3>Why Python Alone Isn't Enough {#python-limitation}</h3>
<p>Preprocessing transcriptions and creating translated subtitles require <strong>context-aware judgment</strong>. Python scripts can only handle formulaic processing like filler removal and string replacement. The following tasks are beyond Python's capabilities:</p>
<ul>
<li><strong>Context-based text refinement</strong>: Restructuring spoken language into readable text while understanding the speaker's intent</li>
<li><strong>Translation</strong>: Natural multilingual translation that preserves nuance</li>
<li><strong>TTS optimization</strong>: Compressing text to fit within specified time windows while preserving meaning</li>
<li><strong>Contextual proper noun judgment</strong>: Cases where the same sound maps to different correct spellings depending on context</li>
</ul>
<p>In other words, this workflow is built on a <strong>two-stage approach: "parts Python can handle mechanically" and "parts that require LLM understanding"</strong>.</p>
<h3>Why I Recommend Claude {#why-claude}</h3>
<p>Translation and dubbing subtitle creation can also be done with ChatGPT (GPT-5.4). However, I find that <strong>Claude is clearly superior when it comes to creating dubbing subtitles (TTS optimization)</strong>.</p>
<p>Creating dubbing subtitles involves a massive number of constrained instructions like "compress this sentence to fit within a 3.5-second window, keeping it under 15 words while preserving meaning." Claude excels at this kind of <strong>deft text compression to fit specified time windows</strong>. It doesn't just shorten sentences — it consistently produces output that considers naturalness when read aloud by TTS (breath points, contraction usage, list compression).</p>
<p>When building this workflow, I actually created dubbing subtitles with Claude and optimized 181 cues down to 146 cues (19.3% reduction). Claude handled the delicate task of keeping every cue's speed ratio (text length / display time) within 1.3x with high precision.</p>
<h2>Step 1: Create Subtitles (Text) {#step1-whisper}</h2>
<h3>Transcription Methods {#transcription}</h3>
<p>First, prepare the Japanese text that serves as the starting point for dubbing. There are two main methods.</p>
<h4>Method 1: Premiere Pro's Transcription (Recommended)</h4>
<p>This is the method I actually use. Premiere Pro has a built-in <strong>Speech to Text</strong> feature that delivers high-accuracy transcription with video context awareness. Simply run "Transcribe" from Premiere Pro's caption panel and <strong>export the result as a text (.txt) file</strong>.</p>
<p>Premiere Pro's transcription also analyzes video visual information, making it feel more accurate for proper nouns and technical terms compared to Whisper alone. If you're already editing in Premiere Pro, it works without any additional setup — a major advantage.</p>
<p>Note that Premiere Pro also has a filler word auto-removal feature ("um," "uh," etc.), but I recommend <strong>exporting without applying it</strong>. Applying filler removal over-fragments the timecodes, which can degrade Claude's context comprehension accuracy during the meaning-based refinement that follows. Letting Claude handle filler removal while considering context produces higher quality text.</p>
<h4>Method 2: Using Whisper (faster-whisper)</h4>
<p>For environments without Premiere Pro, <a href="https://github.com/SYSTRAN/faster-whisper">faster-whisper</a> is a powerful alternative. With a GPU, the <code>large-v3-turbo</code> model provides fast, high-accuracy transcription.</p>
<pre><code class="language-bash">python transcribe.py "input_video.mp4" "output_dir/" \
    --model large-v3-turbo \
    --language ja \
    --beam-size 5
</code></pre>
<pre><code class="language-python">from faster_whisper import WhisperModel

model = WhisperModel("large-v3-turbo", device="cuda", compute_type="float16")
segments, info = model.transcribe(
    "input_video.mp4",
    language="ja",
    beam_size=5,
    vad_filter=True,
    vad_parameters={"min_silence_duration_ms": 500},
    word_timestamps=True,
    temperature=0.0,
)
</code></pre>
<p><code>vad_filter=True</code> auto-detects silent sections, improving segment splitting. Save the output in SRT format.</p>
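<p>Saving the segments in SRT format can be sketched as follows (the helper names <code>format_ts</code> and <code>write_srt</code> are my own, not part of faster-whisper):</p>

```python
def format_ts(seconds: float) -> str:
    """Convert seconds to an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    hh, rem = divmod(ms, 3_600_000)
    mm, rem = divmod(rem, 60_000)
    ss, ms = divmod(rem, 1_000)
    return f"{hh:02d}:{mm:02d}:{ss:02d},{ms:03d}"

def write_srt(segments, path):
    """Write transcription segments (objects with .start/.end/.text) as SRT."""
    with open(path, "w", encoding="utf-8") as f:
        for i, seg in enumerate(segments, start=1):
            f.write(f"{i}\n{format_ts(seg.start)} --> {format_ts(seg.end)}\n")
            f.write(f"{seg.text.strip()}\n\n")
```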
<p>Either method leads to the same preprocessing and translation steps that follow.</p>
<h3>Preprocessing: Python + Claude Two-Stage Approach {#preprocessing}</h3>
<p>Raw transcription output isn't ready for translation. Preprocessing uses a <strong>two-stage approach: mechanical processing with Python</strong> and <strong>meaning-based refinement with Claude</strong>.</p>
<h4>Stage 1: Mechanical Preprocessing with Python</h4>
<p>The following three tasks can be automated with Python scripts.</p>
<p><strong>Filler removal</strong>: Remove segments entirely composed of filler words like "um," "uh," "well."</p>
<pre><code class="language-python">FILLER_ONLY = {
    "はい", "はい。", "えー", "えーと", "えっと",
    "うーん", "うん", "あの", "あー", "おー",
    "そう", "そうです", "そうですね", "ね", "で",
}

if segment["text"].strip() in FILLER_ONLY:
    continue  # Skip this segment entirely
</code></pre>
<p>Leading fillers ("um, so next..." → remove the "um,") are also removed with regex.</p>
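<p>The leading-filler removal can be sketched with a regex like the following (the exact pattern list is illustrative — extend it to match your channel's speech habits):</p>

```python
import re

# A few common Japanese leading fillers, optionally followed by a pause mark.
LEADING_FILLER_RE = re.compile(r"^(?:えー?と?|えっと|まあ|あの|うーん)[、,]?\s*")

def strip_leading_filler(text: str) -> str:
    """Strip up to three stacked leading fillers (e.g. "まああの…")."""
    for _ in range(3):
        new = LEADING_FILLER_RE.sub("", text, count=1)
        if new == text:
            break
        text = new
    return text
```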
<p><strong>Mechanical proper noun replacement</strong>: Whisper frequently misrecognizes technical terms and proper nouns. Prepare a "misrecognition → correct spelling" replacement list in advance.</p>
<pre><code class="language-python"># Example channel-specific replacement list
REPLACEMENTS = [
    ("新学校", "進学校"),      # Homophone confusion
    ("元受け", "元請け"),      # Kanji error
    ("下受け", "下請け"),      # Kanji error
    ("MisrecognizedNameA", "CorrectChannelName"),  # Proper noun
]

for old, new in REPLACEMENTS:
    text = text.replace(old, new)
</code></pre>
<p>You don't need to build this replacement list perfectly from the start. As you create subtitles for more videos and find new misrecognition patterns, just tell Claude to "add this misrecognition to the lookup table." The more videos you process, the more comprehensive the list becomes, and preprocessing accuracy improves over time.</p>
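<p>Since the list grows over time, it helps to keep it in a data file that Claude can append to instead of hardcoding it. A minimal sketch, assuming a JSON file of <code>["wrong", "right"]</code> pairs (the filename <code>replacements.json</code> is my own choice, not from the pipeline above):</p>

```python
import json
from pathlib import Path

def load_replacements(path="replacements.json"):
    """Load (wrong, right) pairs from JSON; empty list if the file doesn't exist yet."""
    p = Path(path)
    if not p.exists():
        return []
    return [tuple(pair) for pair in json.loads(p.read_text(encoding="utf-8"))]

def apply_replacements(text, replacements):
    for old, new in replacements:
        text = text.replace(old, new)
    return text
```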
<p><strong>Segment merging</strong>: Merge segments that are too short with adjacent ones.</p>
<pre><code class="language-python">def should_merge(current, next_seg, max_gap=0.8, max_duration=16.0, max_chars=160):
    gap = next_seg["start"] - current["end"]
    if gap > max_gap:       # More than 0.8s gap = separate segments
        return False
    if next_seg["end"] - current["start"] > max_duration:  # Over 16s after merge
        return False
    if len(current["text"] + next_seg["text"]) > max_chars:  # Over 160 chars after merge
        return False
    return True
</code></pre>
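<p>A driver loop that applies <code>should_merge</code> across a whole segment list might look like this (the function is restated so the sketch is self-contained; segments are dicts with <code>start</code>/<code>end</code>/<code>text</code> as above):</p>

```python
def should_merge(current, next_seg, max_gap=0.8, max_duration=16.0, max_chars=160):
    if next_seg["start"] - current["end"] > max_gap:
        return False
    if next_seg["end"] - current["start"] > max_duration:
        return False
    if len(current["text"] + next_seg["text"]) > max_chars:
        return False
    return True

def merge_segments(segments):
    """Fold adjacent short segments into larger ones using should_merge()."""
    merged = []
    for seg in segments:
        if merged and should_merge(merged[-1], seg):
            merged[-1] = {
                "start": merged[-1]["start"],
                "end": seg["end"],
                "text": merged[-1]["text"] + seg["text"],
            }
        else:
            merged.append(dict(seg))
    return merged
```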
<h4>Stage 2: Meaning-Based Refinement with Claude</h4>
<p>Even after Python's mechanical processing, the transcription is still "spoken language written down." Let's look at an example.</p>
<p><strong>Whisper output (after Python preprocessing)</strong>:</p>
<pre><code>えーとですね今日はまあ設定周りの話をちょっとしたいなと思ってまして
まあ結構つまずくポイントっていうかですねそのへんをまとめていきます
</code></pre>
<p><strong>After Claude refinement</strong>:</p>
<pre><code>今回は設定周りでつまずきやすいポイントをまとめて解説します。
</code></pre>
<p>What's happening here isn't just "typo correction."</p>
<ul>
<li>"えーとですね," "まあ," "ちょっと," "っていうかですね" → Remove conversational filler expressions</li>
<li>Condense content spanning 2 sentences into 1</li>
<li>Reconstruct into clear text that's easy to translate, capturing the speaker's intent</li>
</ul>
<p>Here's another example from the beginning of a livestream.</p>
<p><strong>Whisper output</strong>:</p>
<pre><code>はいみなさんこんにちは
今日はちょっと天気が良くてですね散歩がてら来たんですけど
まあそれはさておきですね今日の本題なんですが
</code></pre>
<p><strong>After Claude refinement</strong>:</p>
<pre><code>(Greeting and small talk removed; starts from the main topic)
</code></pre>
<p>Livestream videos often have several minutes of greetings and small talk at the beginning, but dubbed audio only needs content directly related to the topic. The judgment of "where the main topic begins" is something Python can't make — it's a <strong>process only possible with Claude's contextual understanding</strong>.</p>
<p>This two-stage approach — Stage 1 (Python) mechanically removing noise, Stage 2 (Claude) reconstructing spoken language into natural text — is critical to translation quality.</p>
<p>:::ad</p>
<h2>Step 2: Create Translated Subtitles with Claude &#x26; Optimize for TTS {#step2-translation}</h2>
<h3>Translation Tips {#translation-tips}</h3>
<p>Pass the preprocessed Japanese text to Claude for translation into target languages. It's efficient to instruct TTS optimization simultaneously with translation.</p>
<p>When giving Claude translation instructions, specify these points explicitly.</p>
<p><strong>All languages</strong>:</p>
<ul>
<li>Translate as <strong>natural spoken language</strong>. Unlike display subtitles, this will be read aloud by TTS, so avoid written-language expressions</li>
<li>Keep text within each cue short enough for TTS to finish reading within the display time window</li>
</ul>
<p><strong>For English</strong>:</p>
<ul>
<li>Use contractions actively ("can not" → "can't", "do not" → "don't")</li>
<li>Cut verbose expressions ("Furthermore" → "Also", "In order to" → "To")</li>
</ul>
<p><strong>For Korean</strong>:</p>
<ul>
<li><strong>Use 해요체 (haeyo-che) consistently</strong>. Korean has multiple speech levels with varying formality. 해요체 is the casual-polite register equivalent to Japanese "〜です・〜ます," and sounds most natural for YouTube's conversational tone. In contrast, 합니다체 (hamnida-che) is the formal register used in news and official documents, and sounds stiff and unnatural when read by TTS</li>
<li>Break up long Sino-Korean compound words that TTS reads in one breath into natural rhythm (e.g., "충분조건" → "충분한 조건")</li>
</ul>
<p>Let's look at an actual conversion example.</p>
<p><strong>Japanese (Claude-refined)</strong>:</p>
<pre><code>興味があるとか好きだなと思うこと以外、人ってそもそも長い時間はできないじゃないですか。
</code></pre>
<p><strong>English dubbing subtitle (<code>_en_dub.srt</code>, created by Claude)</strong>:</p>
<pre><code>Other than things you're interested in or things you like,
you can't spend long hours on them, right?
</code></pre>
<p><strong>Korean dubbing subtitle (<code>_kr_dub.srt</code>, created by Claude)</strong>:</p>
<pre><code>흥미가 있거나, 좋아하지 않으면
오랜 시간을 들일 수 없잖아요
</code></pre>
<p>Claude appropriately converts the Japanese colloquial nuance ("〜じゃないですか" — seeking agreement) to "right?" in English and "〜잖아요" (haeyo-che agreement expression) in Korean. This kind of <strong>nuance-level translation accuracy</strong> is why I use Claude instead of machine translation.</p>
<h3>TTS Optimization Rules {#tts-optimization}</h3>
<p>Create "dubbing subtitles" optimized for TTS from the translated subtitles. Name files with language code + <code>_dub</code> like <code>video_name_en_dub.srt</code> to distinguish them from display subtitles.</p>
<p><strong>Core Rule: 1 cue = 1 complete sentence</strong></p>
<p>In regular subtitles, one sentence may span multiple cues, but in TTS dubbing subtitles, <strong>each cue contains one complete sentence</strong>. TTS engines read per-cue, so cutting mid-sentence produces unnatural intonation.</p>
<p>Let's see a concrete example. If the original Japanese is "基礎練習を毎日続けることが上達の近道ですが、ただ量をこなすだけでは効率が悪い" (Practicing basics daily is the fastest way to improve, but just doing a lot isn't efficient):</p>
<p><strong>Regular display subtitle (_en.srt)</strong> — short segments for on-screen display:</p>
<pre><code>1
00:00:03,200 --> 00:00:05,800
Practicing the basics every day

2
00:00:05,800 --> 00:00:08,500
is the fastest way to improve,

3
00:00:08,500 --> 00:00:11,200
but just doing a lot of practice
isn't very efficient.
</code></pre>
<p><strong>TTS dubbing subtitle (_en_dub.srt)</strong> — 1 complete sentence per cue:</p>
<pre><code>1
00:00:03,200 --> 00:00:11,200
Practicing basics daily is the fastest way to improve, but just doing a lot isn't efficient.
</code></pre>
<p>The display subtitle split across 3 cues is combined into 1 cue in the dubbing subtitle. TTS reads this single sentence in one breath starting at <code>00:00:03,200</code>, producing natural intonation.</p>
<p>Here are more text shortening examples.</p>
<table>
<thead>
<tr>
<th>Before (direct translation)</th>
<th>After (TTS-optimized)</th>
<th>Shortening point</th>
</tr>
</thead>
<tbody>
<tr>
<td>It is important to understand that...</td>
<td>You need to understand that...</td>
<td>Cut verbose opener</td>
</tr>
<tr>
<td>In order to achieve this goal</td>
<td>To achieve this</td>
<td>Compress prepositional phrase</td>
</tr>
<tr>
<td>Furthermore, you should also consider</td>
<td>Also, consider</td>
<td>Reduce conjunction and filler</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Rule</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr>
<td>Words per cue</td>
<td>English: 15-20 words, Korean: 25-35 characters</td>
</tr>
<tr>
<td>Ultra-short cues</td>
<td>Merge cues under 1 second with adjacent ones</td>
</tr>
<tr>
<td>Breath points</td>
<td>Insert commas in continuous text over 20 words</td>
</tr>
<tr>
<td>Numbers</td>
<td>Spell out in Hangul for Korean (TTS number reading is unstable)</td>
</tr>
</tbody>
</table>
<p><strong>Speed estimation guide (English)</strong></p>
<pre><code>1 word ≈ 120-150ms
2-second window → max 13-16 words
3-second window → max 20-25 words
4-second window → max 27-33 words
</code></pre>
<p>If the text is too long for the cue's display time, the required speed-up will exceed the 1.3x limit used in Step 4 and the cue has to be rewritten. Use the guide above to calibrate text length.</p>
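<p>The guide above translates into a simple budget check. A sketch using the conservative end of the estimate (150ms per word); cue dicts follow the <code>start_ms</code>/<code>end_ms</code>/<code>text</code> shape used by the SRT parser in Step 3:</p>

```python
MS_PER_WORD = 150  # conservative end of the 120-150ms/word guide

def max_words(window_ms: int) -> int:
    """Maximum English words that fit a cue window at ~150ms per word."""
    return window_ms // MS_PER_WORD

def over_budget(cue) -> bool:
    """True if a cue's text likely won't fit its display window."""
    window_ms = cue["end_ms"] - cue["start_ms"]
    return len(cue["text"].split()) > max_words(window_ms)
```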
<h2>Step 3: Generate Audio with ElevenLabs API {#step3-tts}</h2>
<p>This is the core of the workflow. Send each cue from the dubbing SRT to ElevenLabs API and retrieve audio files.</p>
<h3>API Basics {#api-basics}</h3>
<pre><code>Endpoint: POST https://api.elevenlabs.io/v1/text-to-speech/{voice_id}
Auth header: xi-api-key: &#x3C;your-api-key>
Response: audio/mpeg (MP3 binary)
</code></pre>
<p>Request body:</p>
<pre><code class="language-json">{
  "text": "The text to synthesize.",
  "model_id": "eleven_flash_v2_5",
  "voice_settings": {
    "stability": 0.5,
    "similarity_boost": 0.75
  }
}
</code></pre>
<p><code>stability</code> controls voice consistency (higher = more stable, lower = more expressive), <code>similarity_boost</code> controls fidelity to the selected voice. The balance above is recommended for dubbing.</p>
<h3>Python Script Implementation {#python-script}</h3>
<p>Here's the overall processing flow.</p>
<pre><code class="language-python">import requests, hashlib, json, re, shutil, subprocess, time
from pathlib import Path
from dotenv import load_dotenv

API_BASE = "https://api.elevenlabs.io/v1"

def tts_generate(api_key, voice_id, text, output_path,
                 model_id="eleven_flash_v2_5", max_retries=3):
    """Generate MP3 audio from a single text"""
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    headers = {
        "xi-api-key": api_key,
        "Content-Type": "application/json",
    }
    body = {
        "text": text,
        "model_id": model_id,
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    }

    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=body,
                             stream=True, timeout=60)
        if resp.status_code == 200:
            with open(output_path, "wb") as f:
                for chunk in resp.iter_content(chunk_size=4096):
                    f.write(chunk)
            return
        if resp.status_code in (429, 500, 502, 503):
            wait = 2 ** (attempt + 1)
            time.sleep(wait)
            continue
        resp.raise_for_status()
    raise RuntimeError(f"TTS failed after {max_retries} retries")
</code></pre>
<p>SRT parsing is also implemented from scratch using regex to extract cue numbers, timestamps, and text.</p>
<pre><code class="language-python">SRT_BLOCK_RE = re.compile(
    r"(?ms)^\s*(\d+)\s*\n"
    r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n"
    r"(.*?)(?=\n{2,}|\Z)"
)

def parse_ts(value):
    """Convert SRT timestamp to milliseconds"""
    hh, mm, rest = value.split(":")
    ss, ms = rest.split(",")
    return ((int(hh) * 60 + int(mm)) * 60 + int(ss)) * 1000 + int(ms)

def parse_srt(path):
    text = Path(path).read_text(encoding="utf-8-sig")
    cues = []
    for m in SRT_BLOCK_RE.finditer(text):
        cues.append({
            "index": int(m.group(1)),
            "start_ms": parse_ts(m.group(2)),
            "end_ms": parse_ts(m.group(3)),
            "text": m.group(4).strip(),
        })
    return cues
</code></pre>
<p>Call <code>tts_generate()</code> for each cue to retrieve individual MP3 files.</p>
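<p>Putting the pieces together, here is a minimal per-cue driver — with the TTS call injected as a parameter so it can be dry-run without spending credits (the numbered-filename scheme is my own):</p>

```python
from pathlib import Path

def generate_all(cues, tts_fn, out_dir):
    """Run tts_fn(text, output_path) for each cue; returns the output paths."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    paths = []
    for cue in cues:
        path = out_dir / f"cue_{cue['index']:04d}.mp3"
        if cue["text"].strip():  # skip empty cues instead of calling the API
            tts_fn(cue["text"], path)
        paths.append(path)
    return paths
```

<p>In production, <code>tts_fn</code> would be a closure over <code>tts_generate</code> with your API key and Voice ID bound.</p>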
<h3>Caching &#x26; Retry Strategy {#cache-and-retry}</h3>
<p>To minimize API costs, cache generated audio. Generate a SHA1 hash from text, Voice ID, and model ID as the cache key.</p>
<pre><code class="language-python">def build_cache_key(voice_id, model_id, text):
    payload = f"{voice_id}\n{model_id}\n{text}".encode("utf-8")
    return hashlib.sha1(payload).hexdigest()

cache_dir = Path("_tts_cache")
cache_dir.mkdir(exist_ok=True)

key = build_cache_key(voice_id, model_id, cue["text"])
cached = cache_dir / f"{key}.mp3"

if cached.exists():
    # Cache hit → skip API call
    shutil.copy(cached, output_path)
else:
    tts_generate(api_key, voice_id, cue["text"], cached, model_id)
    shutil.copy(cached, output_path)
</code></pre>
<p>Modifying text changes the hash, automatically invalidating old cache. Cache from test runs (generating only the first 5 cues) is reused in production runs, eliminating wasted credits.</p>
<h2>Step 4: Build Timeline with ffmpeg {#step4-timeline}</h2>
<p>Once per-cue MP3s are ready, combine them into a single timeline audio with ffmpeg.</p>
<h3>Speed Adjustment (atempo) {#speed-adjustment}</h3>
<p>When TTS audio is longer than the cue's display time, use the <code>atempo</code> filter to adjust playback speed.</p>
<pre><code class="language-python">def adjust_speed(input_path, output_path, tempo):
    """Adjust WAV audio playback speed by tempo factor"""
    subprocess.run([
        "ffmpeg", "-y", "-hide_banner", "-loglevel", "error",
        "-i", str(input_path),
        "-filter:a", f"atempo={tempo:.4f}",
        "-c:a", "pcm_s16le", "-ar", "48000", "-ac", "2",
        str(output_path),
    ], check=True)
</code></pre>
<p>Speed ratio decision criteria:</p>
<table>
<thead>
<tr>
<th>ratio</th>
<th>Decision</th>
</tr>
</thead>
<tbody>
<tr>
<td>≤ 1.0</td>
<td>No adjustment needed (audio shorter than cue window)</td>
</tr>
<tr>
<td>1.0–1.3</td>
<td>Speed up with atempo</td>
</tr>
<tr>
<td>> 1.3</td>
<td>Text too long → fix the dubbing subtitle</td>
</tr>
</tbody>
</table>
<p>English can tolerate up to 1.5x, but speech starts sounding rushed above 1.3x.</p>
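<p>Measuring each clip and applying the table above can be sketched as follows; <code>probe_duration_sec</code> assumes ffprobe is in PATH, as listed in the prerequisites:</p>

```python
import subprocess

def probe_duration_sec(path):
    """Get a media file's duration in seconds via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return float(out)

def decide_action(audio_sec, window_sec, limit=1.3):
    """Apply the ratio table: "ok", "atempo", or "rewrite" the dubbing subtitle."""
    ratio = audio_sec / window_sec
    if ratio <= 1.0:
        return "ok", ratio
    if ratio <= limit:
        return "atempo", ratio
    return "rewrite", ratio
```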
<h3>Silence Base + adelay for Absolute Timestamp Placement {#timeline-placement}</h3>
<p>This is the <strong>key to timeline construction</strong>. YouTube dubbed audio must match the original video's length exactly. Simply concatenating TTS audio front-to-back eliminates silent gaps between cues, making the result shorter than the original video in most cases — causing the upload to be rejected.</p>
<p>Instead, create a silence WAV matching the full video duration as a base, then <strong>place each cue's audio at absolute positions based on SRT start times</strong>.</p>
<pre><code class="language-python"># Generate silence WAV matching full video duration
subprocess.run([
    "ffmpeg", "-y", "-hide_banner", "-loglevel", "error",
    "-f", "lavfi", "-i", "anullsrc=r=48000:cl=stereo",
    "-t", f"{total_duration_sec:.3f}",
    "-c:a", "pcm_s16le", "silence.wav",
], check=True)
</code></pre>
<p>Apply <code>adelay</code> filter to each cue's audio, placing it at the SRT start time (in milliseconds).</p>
<pre><code># ffmpeg filter graph example
[1:a]adelay=3200|3200[d0]    # Place cue1 at 3.2s
[2:a]adelay=8500|8500[d1]    # Place cue2 at 8.5s
[3:a]adelay=15000|15000[d2]  # Place cue3 at 15.0s
</code></pre>
<p>This approach ensures that even if a cue fails, subsequent timing isn't affected.</p>
<h3>Mixdown with amix {#mixdown}</h3>
<p>Finally, combine the silence base with all cues using the <code>amix</code> filter.</p>
<pre><code>[0:a][d0][d1][d2]amix=inputs=4:duration=first:dropout_transition=0:normalize=0[out]
</code></pre>
<p><strong>Important</strong>: Always specify <code>normalize=0</code>. With the default <code>normalize=1</code>, each input's volume gets normalized to <code>1/N</code>, which with many cues (e.g., 150) results in near-silence.</p>
<p>For large cue counts, batch-process 30 cues at a time with staged mixing. Since filter graphs get long, pass them via <code>-filter_complex_script</code> through a file for safety.</p>
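<p>Generating the filter graph text programmatically keeps it consistent for any cue count. A sketch that reproduces the adelay/amix pattern above (input 0 is the silence base, inputs 1..N are the cue files):</p>

```python
def build_filter_script(cues):
    """Build an adelay+amix filter graph; cues are dicts with "start_ms"."""
    lines = []
    labels = ["[0:a]"]  # input 0 is the silence base
    for i, cue in enumerate(cues):
        d = cue["start_ms"]
        lines.append(f"[{i + 1}:a]adelay={d}|{d}[d{i}]")
        labels.append(f"[d{i}]")
    lines.append(
        "".join(labels)
        + f"amix=inputs={len(cues) + 1}:duration=first"
          ":dropout_transition=0:normalize=0[out]"
    )
    return ";\n".join(lines)
```

<p>Write the returned string to a file and pass it with <code>-filter_complex_script</code>.</p>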
<p>:::ad</p>
<h2>Step 5: Mastering &#x26; Quality Verification {#step5-mastering}</h2>
<h3>Gain Adjustment &#x26; Peak Limiter {#gain-limiter}</h3>
<p>Apply gain and limiter to the timeline audio and output the final MP3.</p>
<pre><code class="language-python">def apply_mastering(input_path, output_path, gain_db=7.0, limit_db=-1.0):
    limit_linear = 10 ** (limit_db / 20)  # Convert dBFS to linear
    subprocess.run([
        "ffmpeg", "-y", "-hide_banner", "-loglevel", "warning",
        "-i", str(input_path),
        "-filter:a", f"volume={gain_db:.2f}dB,alimiter=limit={limit_linear:.4f}",
        "-ar", "48000", "-ac", "2",
        "-c:a", "libmp3lame", "-b:a", "192k",
        str(output_path),
    ], check=True)
</code></pre>
<ul>
<li><code>volume=7.0dB</code>: TTS audio tends to be quieter than original video, so boost it</li>
<li><code>alimiter</code>: Prevent clipping by limiting peaks to -1.0 dBFS</li>
</ul>
<h3>Quality Check via Report {#quality-check}</h3>
<p>Verify generation results with a JSON report.</p>
<pre><code class="language-json">{
  "input_cues": 171,
  "ok_cues": 170,
  "silenced_cues": 0,
  "failed_cues": 0,
  "skipped_empty_cues": 1,
  "cache_hits": 5,
  "duration_ok": true,
  "max_ratio": 1.28
}
</code></pre>
<p>Three key points to check:</p>
<ul>
<li><strong>failed_cues = 0</strong>: All cues generated successfully</li>
<li><strong>silenced_cues = 0</strong>: No cues filled with silence</li>
<li><strong>duration_ok = true</strong>: Output MP3 length matches the original video</li>
</ul>
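<p>These checks are easy to automate as a gate before upload. A sketch against the report schema shown above:</p>

```python
import json

def check_report(path):
    """Return a list of problems found in the report (empty list = pass)."""
    with open(path, encoding="utf-8") as f:
        report = json.load(f)
    problems = []
    if report.get("failed_cues", 0) != 0:
        problems.append(f"failed_cues = {report['failed_cues']}")
    if report.get("silenced_cues", 0) != 0:
        problems.append(f"silenced_cues = {report['silenced_cues']}")
    if not report.get("duration_ok", False):
        problems.append("output duration does not match the original video")
    return problems
```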
<p>As a final check, verify no decode errors with ffmpeg.</p>
<pre><code class="language-bash">ffmpeg -v error -xerror -i output_dub.mp3 -f null NUL
# exit code 0 means no issues ("NUL" is Windows' null device; use "-f null -" on macOS/Linux)
</code></pre>
<h2>Step 6: Combine with BGM &#x26; SFX in Premiere Pro {#step6-premiere}</h2>
<p>The MP3 from Step 5 contains only TTS audio — no BGM (background music) or SFX (sound effects). Here we leverage Premiere Pro's edited timeline.</p>
<p>By adding dubbed audio tracks to the original video's editing project, you can export both the regular video and dubbed audio from the same timeline.</p>
<p><img src="/2ecaaf0838542943780e6d9f08f30cc3/20260329_premierePro_dubAudio.png" alt="Premiere Pro timeline with 4 tracks: A1 (main audio/Japanese), A2 (BGM), A3 (English audio), A4 (Korean audio)"></p>
<p>The export procedure is as follows.</p>
<ol>
<li><strong>Japanese video export</strong>: Enable main audio (A1) + BGM (A2) and export as video (.mp4) normally</li>
<li><strong>English dubbed audio export</strong>: Mute main audio (A1), export audio-only (.mp3) with English audio (A3) + BGM (A2)</li>
<li><strong>Korean dubbed audio export</strong>: Similarly export with Korean audio (A4) + BGM (A2)</li>
</ol>
<p>The final output consists of these three files.</p>
<ul>
<li><code>video_name.mp4</code> — Video with Japanese audio (upload to YouTube)</li>
<li><code>video_name_en_dub.mp3</code> — English dubbed audio (includes BGM+SFX)</li>
<li><code>video_name_kr_dub.mp3</code> — Korean dubbed audio (includes BGM+SFX)</li>
</ul>
<p>With this approach, you only need to toggle track mutes to export each language's audio, so adding more languages barely increases the workload. When viewers switch languages, they get a natural experience where "only the voice changes, BGM and SFX stay the same."</p>
<p>Furthermore, since the original video and all dubbed audio tracks share the same timeline, you can export Shorts with dubbing included as well. There's no need to prepare separate audio for Shorts, so supporting them in multiple languages adds almost no extra work.</p>
<h2>Step 7: Upload to YouTube Studio {#step7-upload}</h2>
<ol>
<li>Open the target video's edit page in YouTube Studio</li>
<li>Add the target language from the "Subtitles" tab</li>
<li>Select "Add dub"</li>
<li>Upload the final MP3 created in Step 6</li>
</ol>
<p>YouTube dubbed audio <strong>must exactly match the original video's length</strong> or the upload will be rejected. This is why Step 4 uses the timeline placement approach (placing each cue at its original timecode position with ffmpeg instead of simple concatenation). This design ensures the total duration always matches the original video, regardless of whether individual cues run slightly long or short.</p>
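<p>Since a length mismatch means a rejected upload, it's worth verifying before heading to YouTube Studio. A minimal sketch using ffprobe (file names are hypothetical examples):</p>

```python
# Sketch: compare the dubbed MP3's duration against the original video
# with ffprobe before uploading. File names below are hypothetical.
import subprocess

def media_duration(path: str) -> float:
    """Return the container duration in seconds, as reported by ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

def lengths_match(video_sec: float, dub_sec: float, tol: float = 0.1) -> bool:
    """True when the dub is within tol seconds of the video length."""
    return abs(video_sec - dub_sec) <= tol

# Usage (hypothetical file names):
# ok = lengths_match(media_duration("video_name.mp4"),
#                    media_duration("video_name_en_dub.mp3"))
```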
<p>Viewers can switch audio languages from the video settings menu (gear icon).</p>
<h2>Cost &#x26; Usage Estimates {#cost-estimation}</h2>
<p>ElevenLabs' <code>eleven_flash_v2_5</code> model costs <strong>0.5 credits per character</strong>.</p>
<h3>Actual Cost (11-Minute Video)</h3>
<p>Here are the actual credits consumed when dubbing an 11-minute video into English and Korean.</p>
<table>
<thead>
<tr>
<th>Language</th>
<th>Credits consumed</th>
</tr>
</thead>
<tbody>
<tr>
<td>English</td>
<td>4,452</td>
</tr>
<tr>
<td>Korean</td>
<td>2,199</td>
</tr>
</tbody>
</table>
<p>ElevenLabs uses <strong>per-character billing</strong>, so languages with higher information density per character are more cost-effective.</p>
<table>
<thead>
<tr>
<th>Language</th>
<th>Cost relative to English (100%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Chinese</td>
<td>35–40%</td>
</tr>
<tr>
<td>Korean</td>
<td>~45%</td>
</tr>
<tr>
<td>Japanese</td>
<td>50–55%</td>
</tr>
<tr>
<td>English</td>
<td>100% (baseline)</td>
</tr>
<tr>
<td>German</td>
<td>110–120%</td>
</tr>
</tbody>
</table>
<p>German tends to be pricier than English due to its longer compound words.</p>
<h3>Cost in Japanese Yen</h3>
<p>At 1 USD = 158 JPY, dubbing an 11-minute video into English + Korean costs:</p>
<table>
<thead>
<tr>
<th>Plan</th>
<th>Monthly</th>
<th>Credits</th>
<th>English</th>
<th>Korean</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>Starter</td>
<td>$5</td>
<td>30,000</td>
<td>¥117</td>
<td>¥58</td>
<td><strong>¥175</strong></td>
</tr>
<tr>
<td>Creator</td>
<td>$22</td>
<td>100,000</td>
<td>¥155</td>
<td>¥76</td>
<td><strong>¥231</strong></td>
</tr>
</tbody>
</table>
<p>Even the Starter Plan covers 2 languages for about ¥175 per 11-minute video. With 30,000 monthly credits, you can process 4–5 videos of similar length per month.</p>
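<p>The yen figures above are simple proration: credits consumed × (plan price ÷ plan credits) × exchange rate. A quick sketch that reproduces the table, using the article's measured credit counts and assumed exchange rate:</p>

```python
# Sketch: reproduce the cost table above. Credit counts are the measured
# values from this article; the rates are the article's assumptions.
CREDITS_PER_CHAR = 0.5   # eleven_flash_v2_5 billing rate
JPY_PER_USD = 158        # exchange rate assumed in the article

def credits_for(char_count: int) -> float:
    """Credits consumed for a text of the given character count."""
    return char_count * CREDITS_PER_CHAR

def cost_jpy(credits_used: float, plan_usd: float, plan_credits: int) -> float:
    """Prorated yen cost of the credits consumed on a given plan."""
    return credits_used * plan_usd / plan_credits * JPY_PER_USD

# Starter plan ($5 / 30,000 credits), 11-minute video
en_cost = cost_jpy(4452, 5, 30_000)  # English, about 117 yen
kr_cost = cost_jpy(2199, 5, 30_000)  # Korean, about 58 yen
```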
<h3>Cost Reduction Tips</h3>
<p>The key to cost reduction is <strong>reducing text character count</strong>. Using contractions ("do not" → "don't") and cutting verbose expressions ("Furthermore" → "Also") can achieve 5–10% savings. Changing the audio output format does not affect credit consumption, since billing is purely per character.</p>
<h2>Conclusion {#conclusion}</h2>
<p>This article presented a workflow for automatically generating multilingual dubbed audio using ElevenLabs API combined with Python + ffmpeg.</p>
<p>Here's a recap of the full flow.</p>
<ol>
<li>Generate Japanese text with <strong>Premiere Pro or Whisper</strong>, then preprocess and refine with Python + <strong>Claude</strong></li>
<li>Create TTS-optimized <strong>dubbing subtitles</strong> with <strong>Claude</strong> translation</li>
<li>Generate per-cue MP3 audio with <strong>ElevenLabs API</strong></li>
<li>Build an absolute-timestamp timeline with <strong>ffmpeg</strong>'s <code>adelay</code> + <code>amix</code></li>
<li><strong>Master</strong> with gain adjustment and limiter, output dubbed audio MP3</li>
<li>Mute the original audio in <strong>Premiere Pro</strong> and combine with BGM/SFX to create the final MP3</li>
<li>Upload to <strong>YouTube Studio</strong> as dubbed audio</li>
</ol>
<p>This workflow has two critical elements. First, <strong>Claude's text processing</strong> (Steps 1–2): spoken language refinement, nuance-preserving translation, time-constrained text compression — these all require semantic understanding that Python scripts alone can't provide. Second, <strong>absolute timestamp placement</strong> (Step 4): by placing each cue at its original video timecode position with ffmpeg rather than simple concatenation, the output duration always matches the original video exactly. This is essential since YouTube requires dubbed audio to be the exact same length as the original.</p>
<p>ElevenLabs starts at $5/month and Claude is available on a free plan, making multilingual expansion accessible for individual channels. If you're looking to expand your reach to international audiences, give it a try.</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/c02a03441f57ded3fb6187336bca472d/20260329_ElevenLabs_thumb.png" medium="image"/></item><item><title><![CDATA[Building 'VRCFinder': A Custom Tag System for Discovering VRChat Booth Products]]></title><description><![CDATA[When starting VRChat asset creation, I wanted to efficiently research Booth product data, so I built 'VRCFinder' - a web service that organizes and searches products using custom tags. Here's the story behind its development and how it works.]]></description><link>https://uhiyama-lab.com/en/blog/webdev/vrcfinder-development-story/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/webdev/vrcfinder-development-story/</guid><category><![CDATA[vrchat]]></category><category><![CDATA[booth]]></category><pubDate>Sun, 22 Feb 2026 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>When I started creating VRChat assets (such as outfits), I first wanted to get a broad overview of what products were available on Booth. While Booth has a massive collection of VRChat-related products, there was no way to browse them by criteria like "style," "color," or "body type," so gathering information was time-consuming.</p>
<p>That's why I built "VRCFinder" - a web service that assigns custom tags to Booth products for organized searching and browsing. In this article, I'll share the story behind its development and how it works.</p>
<p><img src="/dcfdda89fb99f31c13b0db8865df347b/VRCFinder_Home.jpg" alt="VRCFinder home screen"></p>
<p>:::post-link{url="<a href="https://vrcfinder.net/ja/">https://vrcfinder.net/ja/</a>" text="VRCFinder - VRChat Booth Search Companion"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#background">Why I Built This</a></li>
<li><a href="#overview">VRCFinder Overview</a></li>
<li><a href="#how-it-works">Data Collection and Custom Tags</a></li>
<li><a href="#features">Key Features</a></li>
<li><a href="#technical">Technical Challenges</a></li>
<li><a href="#for-creators">A Research Tool for Creators Too</a></li>
<li><a href="#guidelines">Guideline Compliance</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<p>:::</p>
<h2>Why I Built This {#background}</h2>
<p>VRChat has been growing its user base year after year, becoming increasingly prominent as a metaverse platform. I've been actively participating in tech and academic meetups hosted in VRChat, using them to stay up-to-date on the latest AI technology and development environments.</p>
<p>One of the most vibrant aspects of VRChat culture is "kaihan" (avatar customization). Users take a base avatar and swap outfits, add accessories, modify textures, and create a completely unique look. The creators' marketplace <strong>Booth</strong> is what drives this customization culture. Countless creators sell VRChat avatars, outfits, and accessories on Booth, creating a massive product ecosystem.</p>
<p>However, Booth isn't a VRChat-specific marketplace. It's a general-purpose marketplace for all kinds of products - doujinshi, music, handmade goods, and more. VRChat-related products are just one of many categories.</p>
<p>Because of this, when searching for VRChat outfits, non-VRC products would appear mixed in with tag and category search results. There was no way to filter by VRChat-specific criteria like compatible avatars, aesthetic style, or body type, making it time-consuming to find what you were looking for.</p>
<p>Booth has creators listing new products every day, making it a very active ecosystem. I'd long thought it would be convenient to organize this product data by clothing type, color, and style for easier searching.</p>
<p>Then in October 2025, Booth updated their scraping guidelines. Among their examples of welcomed applications was "applications that collect publicly available information for custom search and recommendation purposes." When I saw this, I knew it was exactly what I wanted to build, and I committed to developing VRCFinder. (Details on guideline compliance are discussed <a href="#guidelines">later</a>.)</p>
<p>:::ad</p>
<h2>VRCFinder Overview {#overview}</h2>
<p>VRCFinder is a free web service that assigns custom tags and keywords to VRChat-related products on Booth, enabling multi-faceted searching and browsing. No account registration is required, and it works on both desktop and mobile.</p>
<p>An important note: VRCFinder is not a real-time search engine. It covers products <strong>released since 2022</strong> with <strong>500 or more likes (favorites)</strong>, with data collected and analyzed periodically. Since the tag assignment analysis takes time for each product, we also limit the year range.</p>
<p>For checking the latest new products, going directly to Booth is the fastest option. What VRCFinder excels at is scenarios like "I want to find swimwear for summer" or "I'm planning a gothic-themed avatar customization - what outfits are out there?" It helps you efficiently find related products by style, outfit type, and other criteria when you have a specific season or customization theme in mind.</p>
<p>:::post-link{url="<a href="https://vrcfinder.net/ja/">https://vrcfinder.net/ja/</a>" text="VRCFinder - VRChat Booth Search Companion"}</p>
<h2>Data Collection and Custom Tags {#how-it-works}</h2>
<h3>Background on Collection Criteria</h3>
<p>The "since 2022, 500+ likes" threshold was set as a practical operational baseline. Making it too strict reduces the number of listed products, while making it too lenient increases the volume too much, slowing down analysis and reducing data update frequency. This threshold represents a balance between coverage and update speed.</p>
<h3>Tag Generation Process</h3>
<p>Custom tags are generated by combining multiple information sources.</p>
<ol>
<li><strong>Text Analysis</strong>: Extracts compatible avatars, outfit types, and distinctive keywords from product titles, registered tags, and descriptions</li>
<li><strong>Image Analysis</strong>: Determines color and style (cute, cool, etc.) from thumbnail images</li>
<li><strong>Manual Curation</strong>: Reviews automated analysis results, corrects misclassifications, and fills in missing tags</li>
</ol>
<h3>From Text Analysis to Image Analysis</h3>
<p>Initially, tags were generated through text analysis alone - extracting outfit types and compatible avatar names from titles and descriptions. While this worked to some extent, there were limits to what text could provide.</p>
<p>Booth product pages include the seller-set title, tags, and description. However, information that can be <strong>gleaned from visual appearance</strong> - like outfit style (cute, cool, etc.), main color, decorative features, and body type impression - is rarely explicitly stated on product pages. Sellers don't typically write "this outfit is pink with a cute aesthetic."</p>
<p>That's where thumbnail image analysis came in. By analyzing images, we can assign tags for outfit type, style, visual characteristics, color, decorations, and body type - objective information that isn't even registered on the Booth product page itself. This enables searches like "gothic outfits" or "cool-toned long coats" based on abstract visual concepts.</p>
<h2>Key Features {#features}</h2>
<h3>Custom Tag and Keyword Search</h3>
<p>Search across products using custom tags and keywords that don't exist on Booth. You can find products using intuitive terms like style tags ("cute," "cool," "gothic," "Japanese-style") and item type tags ("mini skirt," "long coat," "maid outfit," etc.).</p>
<h3>Compatible Model Directory</h3>
<p>Browse products organized by popular VRChat avatar models. Simply select your avatar to efficiently find outfits and accessories designed for it.</p>
<p><img src="/7a4e04ea066484a12969793d062ccee3/VRCFinder_Search.jpg" alt="Compatible model directory"></p>
<h3>Category and Tag Directory</h3>
<p>We provide organized category and tag directory pages. Even if you don't know what tags are available, you can simply tap on interesting tags from the list to browse related products.</p>
<p><img src="/bc38fdf1145d90a0482451b5bf81b9ae/VRCFinder_Category.jpg" alt="Tag directory"></p>
<h3>Style, Body Type, and Color Filters</h3>
<p>Filter by visual aesthetic (style), avatar body type, main color, and other criteria. Search works not only for outfits but also for worlds. This is especially useful when you have a customization theme in mind - for example, if you want a "blue-themed coordination," just select blue in the color filter to find blue outfits and worlds together. Combining multiple criteria helps you narrow down to items that match your vision more closely.</p>
<p><img src="/900958d58e347cdb533b090d8418fbff/VRCFinder_Search_Blue.jpg" alt="Search example with &#x22;Blue&#x22; color selected in World category"></p>
<p>:::ad</p>
<h2>Technical Challenges {#technical}</h2>
<p>Here are some of the real challenges I encountered during development and how I addressed them.</p>
<h3>Obtaining Neutral Product Names</h3>
<p>On Booth, creators sometimes run limited-time sales, and it's common practice to prepend sale text like "[50% OFF]" or "[ON SALE]" to the product name during the promotion.</p>
<p>While this is helpful information for buyers, displaying it as-is in VRCFinder's product list would push the actual product name out of view, frequently breaking the layout. Plus, when the sale ends, the name reverts to normal, making the display inconsistent.</p>
<p>To handle this, I implemented processing to detect and hide sale-related text from the list display.</p>
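<p>The idea can be sketched with a regular expression. This is a simplified illustration, not VRCFinder's actual rules; only bracketed prefixes containing sale keywords are stripped, so legitimate bracketed product names survive.</p>

```python
# Sketch: strip leading sale banners like "[50% OFF]" or "【SALE】" from
# product names. The keyword list here is an illustrative example.
import re

# Leading [...] or 【...】 that contains a sale keyword
SALE_PREFIX = re.compile(
    r"^\s*[\[【](?:[^\]】]*(?:OFF|SALE|セール)[^\]】]*)[\]】]\s*",
    re.IGNORECASE,
)

def neutral_name(title: str) -> str:
    """Remove sale-related prefixes so lists show the product name."""
    return SALE_PREFIX.sub("", title)
```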
<h3>Unifying Tag Variations</h3>
<p>Some creators diligently register detailed tags for their products. However, the same concept often appears under different spellings.</p>
<ul>
<li>"Angel ring" and "Halo"</li>
<li>"Gloves" and "Mittens"</li>
<li>"Horns" and "Antlers"</li>
<li>"Modular Avatar" and "ModularAvatar"</li>
<li>"Magic wand" and "Magic staff"</li>
</ul>
<p>Treating these as separate tags would fragment search results too much. It would be a shame if someone searching for "halo" missed products tagged as "angel ring." VRCFinder addresses this with a synonym dictionary that merges equivalent tags.</p>
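<p>Conceptually, the synonym dictionary is a plain mapping from each variant spelling to one canonical tag, applied both when indexing products and when evaluating a search query. A minimal sketch using the examples above (the real dictionary is far larger):</p>

```python
# Sketch: merge equivalent tags through a synonym dictionary before
# indexing and before evaluating a search. Entries mirror the examples
# above; keys are lowercased for case-insensitive lookup.
SYNONYMS = {
    "angel ring": "halo",
    "mittens": "gloves",
    "antlers": "horns",
    "modularavatar": "modular avatar",
    "magic staff": "magic wand",
}

def canonical_tag(tag: str) -> str:
    """Collapse a variant spelling into its canonical tag."""
    key = tag.strip().lower()
    return SYNONYMS.get(key, key)
```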
<h3>Handling Model Name Variations</h3>
<p>The spelling variation issue also affected compatible avatar model listings. For example, the popular model "Lapwing" appears in various katakana spellings across different creators' listings.</p>
<p>This is one of the subtle reasons searching on Booth can be frustrating. Searching for the official name won't find products registered under alternative spellings. VRCFinder creates a variation dictionary for each compatible model to ensure all spelling variants are properly captured.</p>
<h2>A Research Tool for Creators Too {#for-creators}</h2>
<p>VRCFinder originally started as a tool I built for my own asset creation research.</p>
<p>Since custom tags enable cross-cutting product browsing, it doubles as a convenient research tool for creators.</p>
<ul>
<li><strong>Trend Analysis</strong>: See what aesthetic styles are popular for outfits</li>
<li><strong>Competitive Research</strong>: Check how many products exist in a specific category</li>
<li><strong>Niche Discovery</strong>: Find categories with few products</li>
<li><strong>Target Avatar Selection</strong>: Gauge which avatars have the highest demand</li>
</ul>
<p>By focusing on popular products with 500+ likes, it's also useful for understanding design trends and what users favor. As a result, it's become a handy search tool not just for creators but for buyers as well.</p>
<h2>Guideline Compliance {#guidelines}</h2>
<p>VRCFinder's data collection follows Booth's guidelines and their revised scraping guidelines (October 10, 2025), and is limited to publicly available information.</p>
<p>Among Booth's examples of welcomed applications is "applications that collect publicly available information for custom search and recommendation purposes," and VRCFinder operates in line with this policy.</p>
<p>Reference: <a href="https://booth.pm/guidelines">Booth Guidelines</a> / <a href="https://booth.pm/announcements/863">About the Revised Scraping Guidelines (October 10, 2025)</a></p>
<p>:::ad</p>
<h2>Conclusion {#conclusion}</h2>
<p>What started as a desire to more efficiently research Booth product data as a VRChat asset creator has taken shape as VRCFinder.</p>
<p>Booth has an incredible selection of products. VRCFinder adds search dimensions on top of that ecosystem. By enabling searches across style, color, body type, and compatible models, it helps connect you with your ideal items.</p>
<p>I'll continue working on expanding listed products and improving tag accuracy. Please note that since VRCFinder displays periodically collected data, always check the Booth product page for the latest pricing and availability information.</p>
<p>:::post-link{url="<a href="https://vrcfinder.net/ja/">https://vrcfinder.net/ja/</a>" text="VRCFinder - VRChat Booth Search Companion"}</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/dcfdda89fb99f31c13b0db8865df347b/VRCFinder_Home.jpg" medium="image"/></item><item><title><![CDATA[Genshin Miliastra Wonderland Tutorial: Complete Node Guide with Japanese UI Screenshots]]></title><description><![CDATA[Struggling with the Miliastra Wonderland tutorials because they use English UI? This guide provides screenshots of every node configuration captured in the Japanese UI. Covers both the 'First Wonderland Component' and 'First Wonderland' tutorial series. A practical companion guide to use alongside the official videos.]]></description><link>https://uhiyama-lab.com/en/blog/gamedev/genshin-miliastra-tutorial-node-jp/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/gamedev/genshin-miliastra-tutorial-node-jp/</guid><category><![CDATA[ugc]]></category><category><![CDATA[learning]]></category><pubDate>Sun, 09 Nov 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="/67554480a79b2b8c006ce0a90be475fb/000_UGC%E3%83%81%E3%83%A5%E3%83%BC%E3%83%88%E3%83%AA%E3%82%A2%E3%83%AB.webp" alt="Genshin Miliastra Wonderland main screen"></p>
<p>"The Miliastra Wonderland tutorial videos use English UI and I can't follow along..."</p>
<p>Sound familiar? Genshin Impact's "Miliastra Wonderland" is an amazing UGC (User-Generated Content) feature, but since the official tutorials use English UI, many Japanese-speaking users struggle to find the corresponding nodes in their Japanese interface.</p>
<p>This article provides <strong>screenshots of every node configuration from the tutorial videos, captured in the Japanese UI</strong>. Use it alongside the videos for a smoother learning experience.</p>
<p><strong>Related Links:</strong></p>
<ul>
<li><a href="https://genshin.hoyoverse.com/miliastra_wonderland/?lang=ja-jp">Miliastra Wonderland Official Site</a></li>
<li><a href="https://x.com/GI_Miliastra">Miliastra Wonderland Official X (formerly Twitter)</a></li>
<li><a href="https://www.youtube.com/@MiliastraWonderland_JP">Miliastra Wonderland Official YouTube</a></li>
</ul>
<p>:::toc</p>
<ul>
<li><a href="#about-realm">What Is Genshin - Miliastra Wonderland?</a></li>
<li><a href="#creator-level">About Creator Level</a>
<ul>
<li><a href="#creator-level-features">8 Major Features Unlocked</a></li>
<li><a href="#creator-level-rewards">Features Unlocked/Expanded at Each Level</a></li>
<li><a href="#level-up-method">How to Level Up</a></li>
</ul>
</li>
<li><a href="#tutorial-videos">About the Tutorial Videos</a></li>
<li><a href="#first-component">Build Your First Wonderland Component</a>
<ul>
<li><a href="#component-02">2. Setting and Modifying Custom Variables</a></li>
<li><a href="#component-04">4. Building a Player Node Graph</a></li>
</ul>
</li>
<li><a href="#first-realm">Build Your First Wonderland</a>
<ul>
<li><a href="#realm-02">2. Basic Motion Devices</a></li>
<li><a href="#realm-03">3. Effects</a></li>
<li><a href="#realm-04">4. Skills and Jobs</a></li>
<li><a href="#realm-05">5. Monster Status Presets</a></li>
<li><a href="#realm-06">6. Collision and Interaction</a></li>
<li><a href="#realm-07">7. Logic Design</a></li>
<li><a href="#realm-08">8. Building the Wonderland</a></li>
<li><a href="#realm-09">9. Interface</a></li>
</ul>
</li>
<li><a href="#summary">Summary</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>What Is Genshin - Miliastra Wonderland? {#about-realm}</h2>
<p><img src="/7ae9b9070142d00de1bec41ff0423f66/001_%E6%93%8D%E4%BD%9C%E7%94%BB%E9%9D%A2.jpg" alt="Genshin Miliastra Wonderland editor screen"></p>
<p>"Genshin Impact - Miliastra Wonderland" is a UGC (User-Generated Content) feature added to Genshin Impact on October 22, 2025. It lets you create original games using Genshin's assets through a visual scripting system.</p>
<h3>Getting Started</h3>
<ol>
<li>Complete the Genshin Impact prologue</li>
<li>The "Miliastra Wonderland" mode unlocks</li>
<li>Select "My Wonderland"</li>
<li>Start node editing in the editor</li>
</ol>
<p>Node editing opens in a separate window where you connect visual nodes to build game logic.</p>
<p>:::ad</p>
<h2>About Creator Level {#creator-level}</h2>
<p><img src="/471255c77aeb29f923a825a3376de95d/002_%E3%82%AF%E3%83%AA%E3%82%A8%E3%82%A4%E3%82%BF%E3%83%BC%E3%83%AC%E3%83%99%E3%83%AB.png" alt="Creator Level screen"></p>
<p>In Genshin's Miliastra Wonderland, some features require a "Creator Level" to access. Levels range from 1 to 5, with various features being unlocked or expanded at each level.</p>
<h3>8 Major Features Unlocked {#creator-level-features}</h3>
<p>The following 8 features are progressively unlocked and expanded based on your Creator Level:</p>
<ol>
<li><strong>Cloud Save</strong> - Cloud storage for wonderland data</li>
<li><strong>Inspiration Support Program</strong> - Creative support program</li>
<li><strong>Custom Text Display</strong> - Display custom text content</li>
<li><strong>Performance Box</strong> - Performance optimization features</li>
<li><strong>Achievements</strong> - Implementation of achievement elements</li>
<li><strong>In-Stage Data</strong> - Stage data management</li>
<li><strong>Ranking</strong> - Ranking system</li>
<li><strong>Rank System</strong> - Player rank features</li>
</ol>
<h3>Features Unlocked/Expanded at Each Level {#creator-level-rewards}</h3>
<p><img src="/299a690b696e6ba9678af09b374672c8/002_1-%E3%82%AF%E3%83%AA%E3%82%A8%E3%82%A4%E3%82%BF%E3%83%BC%E3%83%AC%E3%83%99%E3%83%AB1.png" alt="Creator Level 1 rewards"></p>
<p><img src="/505a25155de937febf27e8f28d3bdea5/002_2-%E3%82%AF%E3%83%AA%E3%82%A8%E3%82%A4%E3%82%BF%E3%83%BC%E3%83%AC%E3%83%99%E3%83%AB2.png" alt="Creator Level 2 rewards"></p>
<p><img src="/7550d28bf6ff6482ff8c4cfbf1a98765/002_3-%E3%82%AF%E3%83%AA%E3%82%A8%E3%82%A4%E3%82%BF%E3%83%BC%E3%83%AC%E3%83%99%E3%83%AB3.png" alt="Creator Level 3 rewards"></p>
<p><img src="/ed4017b709e43883b45953f5c54a35b9/002_4-%E3%82%AF%E3%83%AA%E3%82%A8%E3%82%A4%E3%82%BF%E3%83%BC%E3%83%AC%E3%83%99%E3%83%AB4.png" alt="Creator Level 4 rewards"></p>
<p><img src="/8573273bd71e8b37f7653a6de9d7ff05/002_5-%E3%82%AF%E3%83%AA%E3%82%A8%E3%82%A4%E3%82%BF%E3%83%BC%E3%83%AC%E3%83%99%E3%83%AB5.png" alt="Creator Level 5 rewards"></p>
<h3>How to Level Up {#level-up-method}</h3>
<p>Miliastra Wonderland has a "Wonderland Level" earned by playing wonderlands, but this is separate from "Creator Level." Playing games won't increase your Creator Level. To level up, you need to complete <strong>Growth Tasks</strong>.</p>
<p><img src="/bb1224bc4bdfceef2c9a5620551713a8/004_%E3%82%AF%E3%83%AA%E3%82%A8%E3%82%A4%E3%82%BF%E3%83%BC%E3%83%AC%E3%83%99%E3%83%AB%E3%81%AE%E4%B8%8A%E3%81%92%E6%96%B9.png" alt="How to raise Creator Level"></p>
<p><strong>Reaching Creator Level 2</strong></p>
<p>Complete two quizzes in the Creator Portal: "Wonderland Component Quiz Challenge" and "Wonderland Stage Quiz Challenge." The questions are easy to answer if you've watched the tutorials, so aim for Level 2 first.</p>
<p><img src="/51de3b45638cb92bfe43a07bf09bd85c/005_%E3%82%AF%E3%83%AA%E3%82%A8%E3%82%A4%E3%82%BF%E3%83%BC%E3%82%B3%E3%83%B3%E3%83%9D%E3%83%BC%E3%83%8D%E3%83%B3%E3%83%88%E3%82%AF%E3%82%A4%E3%82%BA.png" alt="Creator Component Quiz"></p>
<p><strong>Level 3 and Above</strong></p>
<p>You'll need to publish your created wonderland and have a certain number of players try it. Joining the official Discord to connect with other creators and share ideas is recommended.</p>
<p><strong>About Reward Changes</strong></p>
<p>Creator Level rewards may be updated in the future. For example, in the October 31, 2025 adjustment, the tab option that displays "Open" when opening treasure chests was moved from a Lv.3 reward to Lv.2.</p>
<p>Reference: <a href="https://genshin.hoyoverse.com/miliastra_wonderland/news/details?id=160687">About Creator Level and Benefit Adjustments</a></p>
<p>:::ad</p>
<h2>About the Miliastra Wonderland Tutorial Videos {#tutorial-videos}</h2>
<p><img src="/5b7dc184912e0b716d071d0929dc5687/006_UGC%E3%83%81%E3%83%A5%E3%83%BC%E3%83%88%E3%83%AA%E3%82%A2%E3%83%AB.png" alt="Genshin Miliastra Wonderland UGC Tutorial screen"></p>
<p>Tutorial videos for Genshin Impact's Miliastra Wonderland are available on the official Creator Portal, YouTube, and X accounts.</p>
<p><strong>Two video series available as of October 2025:</strong></p>
<ul>
<li><strong>"Build Your First Wonderland Component"</strong> - Learn the basics by creating coins and building a system where collecting 5 coins clears the stage</li>
<li><strong>"Build Your First Wonderland"</strong> - Learn to create and connect complex components like enemy spawning, leveling up, and treasure chest mechanics</li>
</ul>
<p>Both are well-structured tutorial videos, but since they're presented in English UI, finding the same nodes in Japanese can be surprisingly difficult.</p>
<p>This article supplements the official tutorials by compiling node configuration screenshots from the Japanese UI version. Refer to these images alongside the videos for a smoother learning experience.</p>
<p>:::ad</p>
<h2>Build Your First Wonderland Component {#first-component}</h2>
<h3>2. Setting and Modifying Custom Variables {#component-02}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=tXXd-y4qWnE">https://www.youtube.com/watch?v=tXXd-y4qWnE</a></p>
<p><strong>Node: "Prefab - Score on Pickup"</strong></p>
<p><img src="/7693e09899d73750a4bb29bca3e54102/A2_%E3%82%B3%E3%82%A4%E3%83%B3%E5%8F%96%E5%BE%97.png" alt="Prefab - Score on Pickup node configuration"></p>
<h3>4. Building a Player Node Graph {#component-04}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=9tzrJLzuDGI">https://www.youtube.com/watch?v=9tzrJLzuDGI</a></p>
<p><strong>Node: "Player - End Wonderland"</strong></p>
<p><img src="/98be6b857225593ec504f8970859af05/A3_%E3%82%B9%E3%83%86%E3%83%BC%E3%82%B8%E7%B5%82%E4%BA%86.png" alt="Player - End Wonderland node configuration"></p>
<p>:::ad</p>
<h2>Build Your First Wonderland {#first-realm}</h2>
<h3>2. Basic Motion Devices {#realm-02}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=1EswERt2m3c">https://www.youtube.com/watch?v=1EswERt2m3c</a></p>
<p><strong>Node: "Activate Cobblestone"</strong></p>
<p><img src="/5040679df7ed8fb7f843a79d81d06e18/B1_%E7%9F%B3%E7%95%B3%E8%B5%B7%E5%8B%95.png" alt="Activate Cobblestone node configuration"></p>
<p><strong>Node: "Collapse Cobblestone"</strong></p>
<p><img src="/dbd97a99ace911a380ddee638f0b24c5/B2_%E7%9F%B3%E7%95%B3%E5%B4%A9%E5%A3%8A.png" alt="Collapse Cobblestone node configuration"></p>
<h3>3. Effects {#realm-03}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=I8F0bvl4jKY">https://www.youtube.com/watch?v=I8F0bvl4jKY</a></p>
<p><strong>Node: "Light the Torch"</strong></p>
<p><img src="/52242ca8a3c483bc11eea538e14bff99/B3_%E6%9D%BE%E6%98%8E%E3%82%92%E7%81%AF%E3%81%99.png" alt="Light the Torch node configuration"></p>
<h3>4. Skills and Jobs {#realm-04}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=yQoQn5IzJqc">https://www.youtube.com/watch?v=yQoQn5IzJqc</a></p>
<p><strong>Node: "Acquire Combat Ability"</strong></p>
<p><img src="/37ac53a7020c78c23021fcbe1e73a08b/B4_%E6%88%A6%E9%97%98%E8%83%BD%E5%8A%9B%E3%81%AE%E7%8D%B2%E5%BE%97.png" alt="Acquire Combat Ability node configuration"></p>
<h3>5. Monster Status Presets {#realm-05}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=It5UWLvC9fg">https://www.youtube.com/watch?v=It5UWLvC9fg</a></p>
<p><strong>Node: "Reach Lv40"</strong></p>
<p><img src="/6ab59d3c5837adbebbbeacfbc372d1ac/B5_Lv40%E3%81%AB%E3%81%AA%E3%82%8B.png" alt="Reach Lv40 node configuration"></p>
<p><strong>Node: "Reach Lv90"</strong></p>
<p><img src="/e3501a61b1ae70332e16597e83607fa4/B6_Lv90%E3%81%AB%E3%81%AA%E3%82%8B.png" alt="Reach Lv90 node configuration"></p>
<p><strong>Node: "Unlock Treasure Chest"</strong></p>
<p><img src="/5d000853469483050cbe554e69c04474/B7_%E5%AE%9D%E7%AE%B1%E3%81%AE%E3%82%A2%E3%83%B3%E3%83%AD%E3%83%83%E3%82%AF.png" alt="Unlock Treasure Chest node configuration"></p>
<p><strong>Node: "Open Treasure Chest"</strong></p>
<p><img src="/54cebd9f4f2ecd2089f056af4362f7c2/B8_%E5%AE%9D%E7%AE%B1%E3%82%92%E9%96%8B%E3%81%8F.png" alt="Open Treasure Chest node configuration"></p>
<p>:::ad</p>
<h3>6. Collision and Interaction {#realm-06}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=ZrlBqOkgLmE">https://www.youtube.com/watch?v=ZrlBqOkgLmE</a></p>
<p><strong>Node: "Light the Torch" (with variable addition)</strong></p>
<p><img src="/fc40aadb87e503f8b214a5b105e32948/B9_%E6%9D%BE%E6%98%8E%E3%82%92%E7%81%AF%E3%81%99_%E4%BF%AE%E6%AD%A3.png" alt="Light the Torch (with variable addition) node configuration"></p>
<p><strong>Node: "Unlock Weapon"</strong></p>
<p><img src="/c81a3b06a81104d0feb43d04a84aed11/B10_%E6%AD%A6%E5%99%A8%E3%81%AE%E3%83%AD%E3%83%83%E3%82%AF%E8%A7%A3%E9%99%A4.png" alt="Unlock Weapon node configuration"></p>
<p><strong>Node: "Pick Up Key to Open Gate"</strong></p>
<p><img src="/b54fc5dbf2d76cb780316c9d01c5c33d/B11_%E9%8D%B5%E3%82%92%E6%8B%BE%E3%81%A3%E3%81%A6%E3%82%B2%E3%83%BC%E3%83%88%E3%82%92%E9%96%8B%E3%81%91%E3%82%8B.png" alt="Pick Up Key to Open Gate node configuration"></p>
<p><strong>Node: "Open Wooden Gate"</strong></p>
<p><img src="/1bd9e04852294eb8779b1829368d7bcb/B12_%E6%9C%A8%E3%81%AE%E3%82%B2%E3%83%BC%E3%83%88%E3%82%92%E9%96%8B%E3%81%91%E3%82%8B.png" alt="Open Wooden Gate node configuration"></p>
<p><strong>Node: "Exit Wonderland"</strong></p>
<p><img src="/35b6b942b22b353c4d5473ecd5d24519/B13_%E5%B9%BB%E5%A2%83%E3%81%8B%E3%82%89%E9%80%80%E5%87%BA.png" alt="Exit Wonderland node configuration"></p>
<h3>7. Logic Design {#realm-07}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=cL1VYOA5zDg">https://www.youtube.com/watch?v=cL1VYOA5zDg</a></p>
<p><strong>Node: "Acquire Combat Ability" (Hilichurl spawn version)</strong></p>
<p><img src="/20ec4a40d95c5af4a650c1c3f6dd30f5/B14_%E6%88%A6%E9%97%98%E8%83%BD%E5%8A%9B%E3%81%AE%E7%8D%B2%E5%BE%97_2.png" alt="Acquire Combat Ability (Hilichurl spawn version) node configuration"></p>
<p><strong>Node: "When Reaching Lv40" (Hilichurl version)</strong></p>
<p><img src="/bc123b74eb3d16f283b16a18eff7eaaa/B15_%E3%83%AC%E3%83%99%E3%83%AB40%E3%81%AB%E3%81%AA%E3%82%8B%E3%81%A8%E3%81%8D(%E3%83%92%E3%83%AB%E3%83%81%E3%83%A3%E3%83%BC%E3%83%AB).png" alt="When Reaching Lv40 (Hilichurl version) node configuration"></p>
<p><strong>Node: "Open Treasure Chest" (key spawn version)</strong></p>
<p><img src="/995b56a143bf5590e916330e993f7239/B16_%E5%AE%9D%E7%AE%B1%E3%82%92%E9%96%8B%E3%81%8F(%E9%8D%B5%E3%81%AE%E5%87%BA%E7%8F%BE).png" alt="Open Treasure Chest (key spawn version) node configuration"></p>
<p><strong>Node: "Player End Wonderland" (with effects + exit device activation)</strong></p>
<p><img src="/08edd1acedbcd6b185e3ff6a9c1ca69a/B17_%E3%83%97%E3%83%AC%E3%82%A4%E3%83%A4%E3%83%BC%E5%B9%BB%E5%A2%83%E7%B5%82%E4%BA%86.png" alt="Player End Wonderland node configuration"></p>
<p>:::ad</p>
<h3>8. Building the Wonderland {#realm-08}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=vHCWOKq8kTI">https://www.youtube.com/watch?v=vHCWOKq8kTI</a></p>
<p><strong>Node: "Activate Cobblestone" (5 cobblestones simultaneous activation)</strong></p>
<p><img src="/70e533cf3c82b243b3872845da8dc2f7/B18_%E7%9F%B3%E7%95%B3%E8%B5%B7%E5%8B%95(5%E3%81%A4).png" alt="Activate Cobblestone (5 simultaneous) node configuration"></p>
<p><strong>Node: "Player - End Wonderland" (with effects added)</strong></p>
<p><img src="/6b3ef3ac777319430f3f22f7b8ab9775/B19_%E3%83%97%E3%83%AC%E3%82%A4%E3%83%A4%E3%83%BC_%E5%B9%BB%E5%A2%83%E7%B5%82%E4%BA%86(%E3%82%A8%E3%83%95%E3%82%A7%E3%82%AF%E3%83%88%E8%BF%BD%E5%8A%A0).png" alt="Player - End Wonderland (with effects) node configuration"></p>
<h3>9. Interface {#realm-09}</h3>
<p><strong>Official Tutorial Video</strong>
<a href="https://www.youtube.com/watch?v=Z-f88zYNrlo">https://www.youtube.com/watch?v=Z-f88zYNrlo</a></p>
<p><strong>Nodes: "Popup - Wonderland Description", "Textbox - Find Weapon", "Textbox - Find Key"</strong></p>
<p><em>Create 3 nodes with the same structure, only changing the prefab index</em></p>
<p><img src="/d64a7097e2fd822c0fc03154fe371bc6/B20_%E3%83%9D%E3%83%83%E3%83%97%E3%82%A2%E3%83%83%E3%83%97_%E5%B9%BB%E5%A2%83%E8%AA%AC%E6%98%8E.png" alt="Popup and Textbox node configuration"></p>
<p>:::ad</p>
<h2>Summary {#summary}</h2>
<p>Genshin Impact's "Miliastra Wonderland" is a groundbreaking UGC feature that lets you create original games using assets from the major title Genshin Impact. The official tutorial videos are very comprehensive, but since they use the English UI, finding the corresponding nodes in the Japanese UI can sometimes be a challenge.</p>
<p>This article covered the node configuration screens in Japanese UI for both the "Build Your First Wonderland Component" and "Build Your First Wonderland" tutorial series.</p>
<h3>For Those Just Getting Started with Miliastra Wonderland</h3>
<ol>
<li><strong>Start with the tutorials</strong>: Learn the basics with the official "Build Your First Wonderland Component" series</li>
<li><strong>Aim for Creator Level 2</strong>: Pass two quizzes to reach Lv.2 and unlock many more features</li>
<li><strong>Use this article as a reference</strong>: Refer to the screenshots in this article while watching the videos for smoother progress</li>
</ol>
<h3>Tips for Creating</h3>
<p>In Miliastra Wonderland, you can freely create games using Genshin's beautiful assets. The official Discord lets you connect with other creators and share your work, so don't hesitate to get involved.</p>
<p>Level up your Creator Level, master the various features, and create captivating wonderlands. I hope this article helps you on your content creation journey in Genshin Impact's Miliastra Wonderland.</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/67554480a79b2b8c006ce0a90be475fb/000_UGC%E3%83%81%E3%83%A5%E3%83%BC%E3%83%88%E3%83%AA%E3%82%A2%E3%83%AB.webp" medium="image"/></item><item><title><![CDATA[Creating Color Palettes Was Too Tedious, So I Built 'Character Color Pattern Maker']]></title><description><![CDATA[I got tired of duplicating layers and swapping colors one by one to try different color schemes, so I built 'Character Color Pattern Maker'. Lock skin and linework colors in place while changing only clothing and hair colors with a single click.]]></description><link>https://uhiyama-lab.com/en/blog/tooldev/character-color-pattern-maker/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/tooldev/character-color-pattern-maker/</guid><pubDate>Fri, 29 Aug 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Creating color variations for characters is surprisingly tedious. When you think "what would this character look like with different hair color?", you have to duplicate layers, swap colors part by part, and repeat the whole process over and over.</p>
<p>You could apply a global hue/saturation adjustment, but that changes skin tones and linework too, so it doesn't work for purely comparing color schemes. It's hard to meet that specific need of "keep the skin and linework as-is, just change the clothes and hair."</p>
<p>That's why I built a tool that lets you load an image in your browser, lock the colors you want to keep, and generate new color patterns with a single click.</p>
<p><img src="/96d56f65d1e893900e1582078cd3648d/001_character-color-pattern-maker.webp" alt="">
*Sample illustration is fan art drawn by the site owner</p>
<p>:::post-link{url="/en/tools/character-color-pattern-maker/" text="Character Color Pattern Maker"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#introduction">The Hassle of Creating Color Patterns</a></li>
<li><a href="#what-is-ccpm">What Is This Tool?</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#how-to-use">How to Use</a></li>
<li><a href="#palette-types">Palette Generation Modes</a></li>
<li><a href="#advanced-tips">Using Color Pattern Sheets</a></li>
<li><a href="#tech-info">Secure Browser-Based Processing</a></li>
</ul>
<p>:::</p>
<h2>The Hassle of Creating Color Patterns {#introduction}</h2>
<p><img src="/d03349435523e5d1afad37f0eed2898f/color-pattern-sheet-pastel-1756088056388.png" alt=""></p>
<p>After finishing an illustration, it's common to wonder "what would it look like in different colors?" But duplicating layers and swapping colors part by part is honestly a chore.</p>
<p>Applying a global color adjustment is another option, but it changes skin tones and linework too, defeating the purpose of a pure color comparison. Even when you just want to change the clothes and hair while keeping skin and linework intact, doing it manually is time-consuming.</p>
<p>To solve this problem, I built this tool. It generates color patterns with a single click, and sometimes you'll stumble upon unexpectedly interesting color combinations.</p>
<h2>What Is This Tool? {#what-is-ccpm}</h2>
<p><img src="/f4ff950fdb504bf0f8b6daf3d6620525/002_character-color-pattern-maker_sample1.webp" alt=""></p>
<p>It's a browser-based tool that lets you upload an illustration and generate color patterns with a single click. Its standout feature is the ability to lock colors you don't want to change (skin, linework, etc.) and exclude colors you don't need (background, etc.).</p>
<p>This means you can keep the character's skin and hair colors untouched while only changing the clothing and accessory colors. You can try patterns based on color theory or discover unexpected combinations through random generation.</p>
<p>:::post-link{url="/en/tools/character-color-pattern-maker/" text="Character Color Pattern Maker"}</p>
<p>:::ad</p>
<h2>Features {#features}</h2>
<ul>
<li>Analyzes illustrations using a perceptually uniform color space (CIELAB) to extract key colors</li>
<li>Automatic linework detection for easily locking outline colors</li>
<li>Click on the image to lock colors you want to keep and exclude colors you don't need</li>
<li>Multiple palette generation modes including random exploration, color theory, pastel, and more</li>
<li>Real-time preview of generated color schemes</li>
<li>Download color pattern sheets as PNG images showing before-and-after illustrations with key color codes</li>
<li>All processing happens within the browser - illustration data is never sent externally</li>
<li>Simple drag-and-drop interface</li>
</ul>
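<p>The CIELAB analysis mentioned above can be sketched in a few lines. This is an illustration rather than the tool's actual code: it shows the standard sRGB to Lab conversion (D65 white point) and the CIE76 distance that makes "perceptually similar" measurable.</p>

```python
# Illustrative sketch: sRGB -> CIELAB conversion and a perceptual
# distance. Not the tool's actual implementation.

def srgb_to_lab(rgb):
    """Convert an (r, g, b) tuple in 0-255 to CIELAB (D65 white point)."""
    def lin(c):  # 1. undo the sRGB gamma curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    # 2. linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    # 3. XYZ -> Lab relative to the D65 reference white
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(c1, c2):
    """CIE76 distance in Lab; small values look similar to the eye."""
    return sum((a - b) ** 2 for a, b in zip(srgb_to_lab(c1), srgb_to_lab(c2))) ** 0.5
```

<p>Clustering pixels by this distance, rather than by raw RGB distance, is what lets a tool group "the same color under slightly different shading" into one key color.</p>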
<h2>How to Use {#how-to-use}</h2>
<p><img src="/0b225dc116b27626a6141ad76cb9fdad/003_character-color-pattern-maker_sample2.webp" alt=""></p>
<p>The basic workflow consists of three steps.</p>
<h3>1. Upload Your Image</h3>
<p><img src="/9546d42b461e388aa285f4d73bd55d48/004-howto-import.png" alt=""></p>
<p>Drag and drop the illustration you want to recolor onto the left side of the screen, or click to select a file. Character illustrations with transparent backgrounds work best.</p>
<h3>2. Lock and Exclude Colors</h3>
<p><img src="/abe4e900e83d48b6dbd73f8a83fbc27d/005-howto-color-settings.png" alt=""></p>
<p>Once the image is analyzed, set which colors you want to keep unchanged. The most intuitive approach is to click directly on the original image to "lock" specific colors.</p>
<p>Start by clicking on skin and hair colors in the preview to lock them. Then exclude background colors. Finally, locking the auto-detected linework color will improve accuracy.</p>
<h3>3. Generate and Download Color Patterns</h3>
<p><img src="/50832b5118731c8aa63b5bf819fdc0b0/006-howto-pattern-generate.png" alt=""></p>
<p>Click "Random Exploration" or any color pattern button to transform all colors except the locked and excluded ones into a new scheme. Clicking the same button repeatedly generates different patterns each time. When you find a color scheme you like, save it with the "Download Color Sheet" button.</p>
<h2>Palette Generation Modes {#palette-types}</h2>
<p>Multiple modes are available so you can try a wide variety of color patterns.</p>
<ul>
<li><strong>Random Exploration</strong>: Each click suggests an entirely new color combination. You might discover something unexpected.</li>
<li><strong>Color Theory Mode</strong>: Generates palettes based on color theory principles like analogous and complementary colors. Results feel cohesive yet fresh.</li>
<li><strong>Pastel</strong>: Transforms the entire palette into soft, gentle tones. Great for achieving a dreamy, cute aesthetic.</li>
<li><strong>Vibrant</strong>: Boosts saturation and brightness for vivid, lively color schemes. Works well for pop-style, energetic characters.</li>
<li><strong>Others</strong>: Try "Blend" mode for mixed colors, "Creative" mode for unconventional combinations, and more.</li>
</ul>
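<p>As a rough illustration of how a harmony-based mode can respect locked colors, here is a minimal sketch. The real tool works in CIELAB; this version uses HSV via <code>colorsys</code> for brevity, and the function and mode names are my own.</p>

```python
import colorsys
import random

# Hue-wheel offsets for two classic harmony rules (fractions of a turn).
HARMONY_OFFSETS = {
    "complementary": [0.5],          # opposite side of the hue wheel
    "analogous": [1 / 12, -1 / 12],  # plus or minus 30 degrees
}

def recolor(palette, locked, mode="complementary", rng=random):
    """Return a new palette; colors in `locked` pass through unchanged."""
    offsets = HARMONY_OFFSETS[mode]
    out = []
    for rgb in palette:
        if rgb in locked:
            out.append(rgb)  # skin, linework, etc. stay put
            continue
        h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
        h = (h + rng.choice(offsets)) % 1.0  # rotate the hue
        out.append(tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v)))
    return out
```

<p>For example, <code>recolor([(255, 0, 0)], locked=set())</code> turns pure red into cyan, its complement, while anything listed in <code>locked</code> is returned untouched.</p>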
<h2>Using Color Pattern Sheets {#advanced-tips}</h2>
<p><img src="/d03349435523e5d1afad37f0eed2898f/color-pattern-sheet-pastel-1756088056388.png" alt=""></p>
<p><img src="/4edb3e00e85bffe7a2f0b3e4f2d62b41/color-pattern-sheet-colorTheory-1756094742949.png" alt=""></p>
<p><img src="/b03f018c3a109c4b1e9d997eff776579/color-pattern-sheet-random-1756094726295.png" alt=""></p>
<p>When you find a color scheme you like, press the "Download Color Sheet" button.</p>
<p>The downloadable color pattern sheet lets you compare before-and-after illustrations at a glance, complete with key color codes.</p>
<ul>
<li><strong>As design reference</strong>: Use as comparison materials when proposing new color schemes to your team or clients.</li>
<li><strong>Social media sharing</strong>: Perfect for asking followers "which color scheme do you prefer?"</li>
<li><strong>Self-review</strong>: By comparing different patterns side by side, you can discover your own color preferences and new possibilities.</li>
</ul>
<h2>Secure Browser-Based Processing {#tech-info}</h2>
<p>This tool is designed with privacy as the top priority. From image uploading to color analysis, palette generation, and image downloading, all processing happens entirely within your browser.</p>
<p>Your illustration data is never sent to or stored on any external server. Once the page is loaded, basic functionality even works offline. You can safely use it with unpublished work.</p>
<p>:::post-link{url="/en/tools/character-color-pattern-maker/" text="Character Color Pattern Maker"}</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/d03349435523e5d1afad37f0eed2898f/color-pattern-sheet-pastel-1756088056388.png" medium="image"/></item><item><title><![CDATA[Taking Hand Reference Photos Was Too Hard, So I Built 'Hand Trace Camera']]></title><description><![CDATA[When drawing hands, trying to photograph your own hand as reference while holding a smartphone at the right angle is nearly impossible. I built 'Hand Trace Camera', a tool that overlays your rough sketch on the camera feed for easy reference shots.]]></description><link>https://uhiyama-lab.com/en/blog/tooldev/hand-trace-camera/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/tooldev/hand-trace-camera/</guid><pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>When drawing illustrations, hands are one of the toughest parts to get right. The complex joint movements, the way fingers bend, the angle of the palm... And when you try to photograph your own hand as a reference, it's nearly impossible to get the ideal angle while holding your smartphone.</p>
<p>That's why I built "Hand Trace Camera" - a tool that overlays your rough hand sketch semi-transparently on the camera feed, letting you capture reference photos at exactly the pose and angle you need.</p>
<p><img src="/1d31b00af6119af1ad6ace206050046f/howto-hand-trace-camera.webp" alt=""></p>
<p>All processing happens entirely within the browser, so your images and photos are never sent externally.</p>
<p>:::post-link{url="/en/tools/hand-trace-camera/" text="Hand Trace Camera"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#introduction">The Challenge of Reference Photography</a></li>
<li><a href="#what-is-htc">What Is This Tool?</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#how-to-use">How to Use</a></li>
<li><a href="#privacy">Runs Entirely in Your Browser</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>The Challenge of Reference Photography {#introduction}</h2>
<p>Hands are one of the most structurally complex parts of the human body. Each of the five fingers has three joints, and when you factor in wrist angle, there are nearly infinite pose variations.</p>
<p>Common frustrations with reference photography:</p>
<ul>
<li>It's physically difficult to photograph your own hand at the angle you want to draw</li>
<li>Pressing the shutter button throws off your pose</li>
<li>It's hard to compare your sketch with your actual hand positioning while shooting</li>
<li>Photographing your non-dominant hand feels unnatural</li>
</ul>
<p>"Hand Trace Camera" solves these problems with an overlay approach that superimposes your rough sketch on the camera feed.</p>
<h2>What Is This Tool? {#what-is-htc}</h2>
<p><img src="/1d31b00af6119af1ad6ace206050046f/howto-hand-trace-camera.webp" alt=""></p>
<p>It's a tool that overlays your rough hand sketch semi-transparently on the camera feed, allowing you to capture the perfect reference photo.</p>
<p>It comes in handy when you've decided on a hand pose but can't quite nail the exact form, when you need to check subtle finger angles, or when you want to accurately depict how a hand grips an object.</p>
<p>Since you can freely adjust the sketch's opacity, you can compare your real hand with the overlay and find the optimal angle and pose.</p>
<p>:::post-link{url="/en/tools/hand-trace-camera/" text="Hand Trace Camera"}</p>
<p>:::ad</p>
<h2>Features {#features}</h2>
<p><img src="/cac6742c157cc209d4a6c69736f83510/hand-trace-camera.png" alt=""></p>
<ul>
<li>Overlay your uploaded image semi-transparently on the camera feed and capture while checking in real time</li>
<li>Freely adjust opacity from 0-100% for fine control over the balance between sketch and live view</li>
<li>Fine-tune size (10-500%), position (X/Y axis), and rotation (-180 to 180 degrees) to perfectly align your sketch with your actual hand</li>
<li>On mobile devices, use one finger to move and two fingers to pinch-zoom and rotate</li>
<li>Supports front/back camera switching (on devices with multiple cameras)</li>
<li>Saves as high-resolution 900x1200 pixel PNG images in a 3:4 portrait aspect ratio</li>
<li>All processing happens within the browser - images and photos are never sent to external servers</li>
<li>Adjustments are instantly reflected in the preview for a smooth, stress-free experience</li>
</ul>
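<p>The opacity control comes down to ordinary alpha blending. Here is a minimal sketch of the math for a single pixel; the tool itself composites on an HTML canvas, not in Python.</p>

```python
# Per-pixel alpha blending: sketch over camera feed at a given opacity.
# Illustration of the math only; the tool does this on a canvas.

def blend(camera_px, sketch_px, opacity):
    """Blend sketch over camera. Pixels are (r, g, b); opacity is 0.0-1.0."""
    return tuple(
        round(s * opacity + c * (1 - opacity))
        for c, s in zip(camera_px, sketch_px)
    )
```

<p>At opacity 0 you see only the camera feed, at 1 only the sketch, and anywhere in between you can line your real hand up against the drawing.</p>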
<h2>How to Use {#how-to-use}</h2>
<p>The basic workflow consists of three steps.</p>
<ol>
<li><strong>Upload Your Rough Sketch</strong></li>
</ol>
<p>Drag and drop your rough hand sketch, or click to upload. Supports PNG, JPG, and WEBP formats.</p>
<ol start="2">
<li><strong>Adjust the Image and Start the Camera</strong></li>
</ol>
<p>Adjust opacity, size, position, and rotation so your sketch aligns with your real hand. When ready, click the "Start Camera" button.</p>
<ol start="3">
<li><strong>Match Your Pose and Capture</strong></li>
</ol>
<p>Pose your real hand to match the sketch and click the "Capture" button. You can preview the captured image before downloading it.</p>
<p>On mobile devices, you can adjust the image intuitively by touching the screen.</p>
<p>:::ad</p>
<h2>Runs Entirely in Your Browser {#privacy}</h2>
<p>This tool is designed to handle all processing entirely within your browser. No special software installation is required.</p>
<p>From image uploading to camera feed processing to image compositing, everything runs in the browser. Your uploaded sketches and captured photos are never sent to or stored on any external server.</p>
<p>:::post-link{url="/en/tools/hand-trace-camera/" text="Hand Trace Camera"}</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/1d31b00af6119af1ad6ace206050046f/howto-hand-trace-camera.webp" medium="image"/></item><item><title><![CDATA[I Failed Steam Review Multiple Times, So I Made a Checklist of All 14 Required Images]]></title><description><![CDATA[A complete list of all 14 images required for Steam with exact sizes and review criteria. Includes lessons from actual rejections about design mistakes to avoid.]]></description><link>https://uhiyama-lab.com/en/blog/gamedev/steam-release-images-checklist/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/gamedev/steam-release-images-checklist/</guid><category><![CDATA[marketing]]></category><pubDate>Fri, 18 Jul 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="/6cccbd788c576fcc91d23d99032cab41/20180718-Steam%E3%82%B9%E3%83%88%E3%82%A2%E3%83%9A%E3%83%BC%E3%82%B8.png" alt=""></p>
<p>I recently released <a href="/en/product/animsprite-pixelizer/">AnimSprite Pixelizer</a> on Booth, itch.io, and Steam — a tool for 2D game development that batch-converts hand-drawn character walk animations (created in tools like CLIP Studio) to a common pixel size and exports them as sprite sheets.</p>
<p><img src="/1fdb2e9a28c745394d10e0023911acc8/movie_convert.webp" alt=""></p>
<p>With Booth and itch.io, you can start selling right after registering your product information and images. Steam, however, requires passing a review by the support team first.</p>
<p>For AnimSprite Pixelizer, the app itself passed review on the first try, but I was asked to make corrections to the product images several times. In this article, I'll cover the 14 images required for Steam and key design pitfalls to avoid.</p>
<p>:::post-link{url="/en/product/animsprite-pixelizer/" text="AnimSprite Pixelizer"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#why-so-many-images">Why Are 14 Images Needed?</a></li>
<li><a href="#image-checklist">Required Image Checklist</a></li>
<li><a href="#design-cautions">Design Pitfalls to Avoid</a></li>
<li><a href="#conclusion">Tips for Efficient Production</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Why Are 14 Images Needed? {#why-so-many-images}</h2>
<p><strong>"Preparing store images is such a pain!!"</strong></p>
<p>Every creator who's sold on Steam has said this at some point. And for good reason — you need to prepare at least 14 images for Steam's review.</p>
<p>"Why so many?!" If you're wondering, take a closer look at Steam:</p>
<ul>
<li>Small images displayed in search result lists (Small Capsule)</li>
<li>Featured images at the top of the front page (Main Capsule)</li>
<li>Tall images displayed during seasonal sales (Vertical Capsule)</li>
<li>The hero image displayed at the top of a game's library detail page, alongside your play time (Library Hero)</li>
</ul>
<p>You need to prepare all of these.</p>
<p>What's more, these images have strict pixel-exact size requirements. For example, the Header Capsule is 920x430px and the Small Capsule is 462x174px; they must be exactly these dimensions. Steamworks assigns images based on the pixel size of the dropped file, so even a slight size difference will result in rejection.</p>
<p><img src="/0ac5b37194d5ea2f8ad4af44ed414122/20250718-Steam%E3%83%89%E3%83%AD%E3%83%83%E3%83%97%E3%83%9C%E3%83%83%E3%82%AF%E3%82%B9.png" alt=""></p>
<p>:::ad</p>
<h2>Required Image Checklist {#image-checklist}</h2>
<p>I've created a checklist of all required images. Use this as a reference while designing in Photoshop or your preferred image editor.</p>
<p><img src="/0f9e02fb6cbaa7f576082b6b39ea8b0c/20250718-Steam%E3%82%B9%E3%83%88%E3%82%A2%E7%99%BB%E9%8C%B2%E3%81%AB%E5%BF%85%E8%A6%81%E3%81%AA%E7%94%BB%E5%83%8F%E4%B8%80%E8%A6%A7.png" alt=""></p>
<p>The graphic assets needed for Steam store registration fall into three categories: "Store Assets," "Screenshot Assets," and "Library Assets." Use the checklist below as your guide.</p>
<p><em>Sizes are as of July 2025.</em></p>
<h3>Store Assets</h3>
<p><img src="/c77bdaeb7a069dd27c406bf18bdd4c42/20250718-Steam%E5%90%88%E6%A0%BC%E7%94%BB%E5%83%8F_1_%E3%82%B9%E3%83%88%E3%82%A2%E3%82%A2%E3%82%BB%E3%83%83%E3%83%88.jpg" alt=""></p>
<table>
<thead>
<tr>
<th>Image Type</th>
<th>Size (px)</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Header Capsule</strong></td>
<td>920x430</td>
<td>Top of store page, search results, etc.</td>
</tr>
<tr>
<td><strong>Small Capsule</strong></td>
<td>462x174</td>
<td>Search result list view</td>
</tr>
<tr>
<td><strong>Main Capsule</strong></td>
<td>1232x706</td>
<td>Front page featured section</td>
</tr>
<tr>
<td><strong>Vertical Capsule</strong></td>
<td>748x896</td>
<td>Tall format for sale events</td>
</tr>
</tbody>
</table>
<h3>Screenshot Assets</h3>
<p><img src="/e4e63a16f728e8b6cf60d50c4517b174/20250718-Steam%E5%90%88%E6%A0%BC%E7%94%BB%E5%83%8F_2_%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%BC%E3%83%B3%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88%E3%82%A2%E3%82%BB%E3%83%83%E3%83%88.jpg" alt=""></p>
<ul>
<li><strong>Screenshots</strong>: 1920x1080 (recommended), <strong>minimum 5 images</strong></li>
</ul>
<h3>Library Assets</h3>
<p><img src="/cd11aadea1ccfadf5aaf5e72e966634f/20250718-Steam%E5%90%88%E6%A0%BC%E7%94%BB%E5%83%8F_3_%E3%83%A9%E3%82%A4%E3%83%96%E3%83%A9%E3%83%AA%E3%82%A2%E3%82%BB%E3%83%83%E3%83%88.jpg" alt=""></p>
<table>
<thead>
<tr>
<th>Image Type</th>
<th>Size (px)</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Library Capsule</strong></td>
<td>600x900</td>
<td>Library grid view</td>
</tr>
<tr>
<td><strong>Library Header</strong></td>
<td>920x430</td>
<td>Top of library detail page</td>
</tr>
<tr>
<td><strong>Library Hero</strong></td>
<td>3840x1240</td>
<td>Large library banner</td>
</tr>
<tr>
<td><strong>Library Logo</strong></td>
<td>1280x720</td>
<td>Game logo display</td>
</tr>
</tbody>
</table>
<h3>Other</h3>
<ul>
<li><strong>Community Icon</strong>: 184x184</li>
</ul>
<p>:::ad</p>
<h2>Design Pitfalls to Avoid {#design-cautions}</h2>
<p>Just like image sizes, there are design-related rejection criteria to watch out for.</p>
<h3>Warning A: "No Text Besides Your Logo!"</h3>
<p><img src="/4fe719516132cf757962b532d8665a57/20250718-Steam%E5%90%88%E6%A0%BC%E7%94%BB%E5%83%8F_4_%E3%83%87%E3%82%B6%E3%82%A4%E3%83%B3%E6%B3%A8%E6%84%8F%E7%82%B9A.jpg" alt=""></p>
<p>When designing, it's tempting to add descriptive text, but any text other than your logo will result in rejection. Even small text like the examples in the image above is not allowed.</p>
<p>Marketing copy, quotes, or any other text information beyond your logo will cause your submission to fail.</p>
<h3>Warning B: "Maximize Legibility!"</h3>
<p><img src="/e37d938e0f85f36d17e235e64d55676e/20250718-Steam%E3%82%B9%E3%83%88%E3%82%A2%E7%99%BB%E9%8C%B2_5_%E3%83%87%E3%82%B6%E3%82%A4%E3%83%B3%E6%B3%A8%E6%84%8F%E7%82%B9B.jpg" alt=""></p>
<p>In the case above, I embedded UI screenshots to convey the app's workflow, but this was rejected. After removing the UI images and displaying elements larger, it passed.</p>
<p>Looking at it objectively, the revised version clearly has better legibility. Remember that these images often appear as thumbnails, and design accordingly.</p>
<h2>Tips for Efficient Production {#conclusion}</h2>
<p>Following the guidelines in this article should help you register your Steam store information smoothly.</p>
<p>The Steam support team gave quite specific feedback about what needed to be fixed, which was very helpful. While there are many articles explaining image sizes, there's almost no information about design rejection criteria, so I hope this serves as a useful reference.</p>
<p>A good workflow is to start designing with the Header Capsule (920x430), save it as a new file, change the canvas size, adjust element positions, save as another new file, and repeat. This way, you can efficiently produce multiple image variants by mainly adjusting positions.</p>
<p>The most tedious parts of the Steam store submission process are probably registering tax information when creating your Steamworks account and preparing the store images. Get past these hurdles and get your game released!</p>
<p>Also, for guidance on which languages to localize for when selling on Steam, check out my <a href="/en/blog/gamedev/steam-game-localization-language/">previous article</a>.</p>
<p>And give <a href="https://store.steampowered.com/app/3849460/AnimSprite_Pixelizer__Convert_Handdrawn_Animations_to_Pixel_Art/">AnimSprite Pixelizer</a> a look too. It's great for 2D game developers who also do their own art.</p>
<p>:::post-link{url="/en/product/animsprite-pixelizer/" text="AnimSprite Pixelizer"}</p>
<p>:::ad</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/cd11aadea1ccfadf5aaf5e72e966634f/20250718-Steam%E5%90%88%E6%A0%BC%E7%94%BB%E5%83%8F_3_%E3%83%A9%E3%82%A4%E3%83%96%E3%83%A9%E3%83%AA%E3%82%A2%E3%82%BB%E3%83%83%E3%83%88.jpg" medium="image"/></item><item><title><![CDATA[Which Languages Should You Localize Your Steam Game Into? A Data-Driven Priority Guide (2025)]]></title><description><![CDATA[Analysis of Steam's official statistics to determine which languages indie game developers should prioritize for localization. English and Simplified Chinese alone cover 63% of users.]]></description><link>https://uhiyama-lab.com/en/blog/gamedev/steam-game-localization-language/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/gamedev/steam-game-localization-language/</guid><category><![CDATA[marketing]]></category><pubDate>Fri, 04 Jul 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="./images/movie_convert.webp" alt=""></p>
<p>I recently released <a href="/en/product/animsprite-pixelizer/">AnimSprite Pixelizer</a> on Booth, itch.io, and Steam — a tool for 2D game development that batch-converts hand-drawn character walk animations (created in tools like CLIP Studio) to a common pixel size and exports them as sprite sheets.</p>
<p><img src="/1c3e4c916282fad3c7d881d149e604b0/thumb-animsprite-pixelizer.jpg" alt=""></p>
<p>After completing the app, I decided to tackle localization as well.</p>
<p>The key question that comes up is: "Which languages should I localize into?" In this article, I'll analyze the "<a href="https://store.steampowered.com/hwsurvey/">Steam Hardware &#x26; Software Survey</a>" data that Steam publishes monthly to help indie and small-team developers decide which languages to prioritize.</p>
<p>:::post-link{url="/en/product/animsprite-pixelizer/" text="AnimSprite Pixelizer"}</p>
<p>:::post-link{url="<a href="https://store.steampowered.com/hwsurvey/">https://store.steampowered.com/hwsurvey/</a>" text="Steam Hardware &#x26; Software Survey"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#steam-language-data">Current Steam Language Data</a></li>
<li><a href="#top-languages">Top 10 Languages and Market Trends</a></li>
<li><a href="#localization-strategy">Localization Priority Rankings</a></li>
<li><a href="#practical-tips">Practical Implementation Tips</a></li>
<li><a href="#conclusion">Designing for Localization from Day One</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Current Steam Language Data {#steam-language-data}</h2>
<p>According to the latest data from Steam's official "Hardware &#x26; Software Survey," the current language distribution among Steam users looks like this. While participation is voluntary, it remains the most reliable official statistic available for the Steam platform.</p>
<p><img src="/d798e490248249208ce115800b86d8e4/20250704-1-Steam202506%E8%A8%80%E8%AA%9E%E5%88%A5%E3%83%A6%E3%83%BC%E3%82%B6%E3%83%BC%E6%95%B0.png" alt=""></p>
<p><em>Steam Hardware &#x26; Software Survey: June 2025</em></p>
<h2>Top 10 Languages and Market Trends {#top-languages}</h2>
<p>Here's the current ranking of Steam users by language:</p>
<table>
<thead>
<tr>
<th>Rank</th>
<th>Language</th>
<th>User Share</th>
<th>vs Previous</th>
<th>Trend</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>1st</strong></td>
<td><strong>English</strong></td>
<td><strong>36.31%</strong></td>
<td>-1.93%</td>
<td>Declining</td>
</tr>
<tr>
<td><strong>2nd</strong></td>
<td><strong>Simplified Chinese</strong></td>
<td><strong>26.73%</strong></td>
<td>+2.61%</td>
<td>Strong growth</td>
</tr>
<tr>
<td><strong>3rd</strong></td>
<td><strong>Russian</strong></td>
<td>9.46%</td>
<td>+0.43%</td>
<td>Growing</td>
</tr>
<tr>
<td>4th</td>
<td>Spanish (Spain)</td>
<td>4.34%</td>
<td>-0.40%</td>
<td>Declining</td>
</tr>
<tr>
<td>5th</td>
<td>Portuguese (Brazil)</td>
<td>3.87%</td>
<td>-0.35%</td>
<td>Declining</td>
</tr>
<tr>
<td>6th</td>
<td>German</td>
<td>2.86%</td>
<td>-0.19%</td>
<td>Declining</td>
</tr>
<tr>
<td>7th</td>
<td>Japanese</td>
<td>2.59%</td>
<td>-0.10%</td>
<td>Slight decline</td>
</tr>
<tr>
<td>8th</td>
<td>French</td>
<td>2.33%</td>
<td>-0.13%</td>
<td>Declining</td>
</tr>
<tr>
<td>9th</td>
<td>Polish</td>
<td>1.68%</td>
<td>-0.09%</td>
<td>Slight decline</td>
</tr>
<tr>
<td>10th</td>
<td>Korean</td>
<td>1.48%</td>
<td>+0.27%</td>
<td>Growing</td>
</tr>
</tbody>
</table>
<h3>Key Market Trends</h3>
<p><strong>Growing language markets:</strong></p>
<ul>
<li><strong>Simplified Chinese</strong>: +2.61% (highest growth). Likely driven by the maturing PC gaming market in China.</li>
<li><strong>Russian</strong>: +0.43%. A market known for strong affinity with indie games.</li>
<li><strong>Korean</strong>: +0.27%. A market with passionate gaming communities.</li>
<li><strong>Traditional Chinese</strong>: +0.07%</li>
<li><strong>Thai</strong>: +0.07%</li>
</ul>
<p><strong>Declining language markets:</strong></p>
<ul>
<li><strong>English</strong>: -1.93% (largest decline)</li>
<li><strong>Spanish (Spain)</strong>: -0.40%</li>
<li><strong>Portuguese (Brazil)</strong>: -0.35%</li>
<li><strong>German</strong>: -0.19%</li>
</ul>
<p>Japanese users account for just 2.59% — a reminder of how vast the global market is.</p>
<p>The most noteworthy insight is that <strong>English and Simplified Chinese combined cover over 63% of all users</strong>. If you're going to localize, supporting these two languages is practically essential.</p>
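<p>The coverage figures in this article can be reproduced directly from the table. Here's a minimal Python sketch, with the shares transcribed from the June 2025 survey data above:</p>

```python
# User shares by language (percent), from the June 2025 survey table above
shares = {
    "English": 36.31, "Simplified Chinese": 26.73, "Russian": 9.46,
    "Spanish (Spain)": 4.34, "Portuguese (Brazil)": 3.87, "German": 2.86,
    "Japanese": 2.59, "French": 2.33, "Polish": 1.68, "Korean": 1.48,
}

def coverage(languages) -> float:
    """Total user share (percent) reached by supporting the given languages."""
    return round(sum(shares[lang] for lang in languages), 2)

print(coverage(["English", "Simplified Chinese"]))  # 63.04
print(coverage(shares))  # 91.65 for all ten languages combined
```

<p>Two languages already reach 63.04% of users; all ten together reach 91.65%.</p>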
<p>Simplified Chinese users have been surging in recent years. In the August 2024 survey — driven by the massive buzz around "Black Myth: Wukong" — Simplified Chinese actually surpassed English to claim the top spot.</p>
<div class="post-link">
<a href="https://www.gamespark.jp/article/2024/09/03/144711.html" target="_blank">Reference: Chinese overtakes English as top Steam language, possibly influenced by Black Myth: Wukong (2024/09/03) – GameSpark</a>
</div>
<p>:::ad</p>
<h2>Localization Priority Rankings {#localization-strategy}</h2>
<p>Let's think about which languages to prioritize with limited development resources.</p>
<p><img src="/782663fde6bd4660106dc5177a9ea33e/20250704-2-Steam202506%E3%83%AD%E3%83%BC%E3%82%AB%E3%83%A9%E3%82%A4%E3%82%BA%E5%84%AA%E5%85%88%E5%BA%A6.png" alt=""></p>
<h3>Tier-Based Priority Framework</h3>
<h4>Tier 1 (Essential)</h4>
<ul>
<li><strong>English</strong> (36.31%) — Still the largest user base and the global standard language</li>
<li><strong>Simplified Chinese</strong> (26.73%) — Rapidly growing with extremely high future potential</li>
</ul>
<p>These two languages alone cover over 63% of users. For small teams and solo developers, focusing on these first is the most efficient approach.</p>
<h4>Tier 2 (High Priority)</h4>
<ul>
<li><strong>Russian</strong> (9.46%) — Steady growth trend, large user base</li>
<li><strong>Spanish (Spain)</strong> (4.34%) — Declining but still a major market. Also potentially reaches the vast Spanish-speaking Latin American audience.</li>
<li><strong>Portuguese (Brazil)</strong> (3.87%) — An important South American market</li>
</ul>
<h4>Tier 3 (Medium Priority)</h4>
<ul>
<li><strong>German</strong> (2.86%) — Major European market</li>
<li><strong>Japanese</strong> (2.59%) — High purchasing power, quality-conscious market</li>
<li><strong>French</strong> (2.33%) — European and Canadian market</li>
</ul>
<h4>Tier 4 (Emerging Markets / Future Potential)</h4>
<ul>
<li><strong>Korean</strong> (1.48%) — Growing (+0.27%), well-developed gaming culture</li>
<li><strong>Traditional Chinese</strong> (1.39%) — Taiwan and Hong Kong market</li>
<li><strong>Thai</strong> (0.88%) — Growing Southeast Asian market</li>
</ul>
<h3>Phased Localization Strategy</h3>
<p>For indie developers, a phased approach like the following is realistic:</p>
<ol>
<li><strong>Phase 1</strong>: Your native language + English (developer's language + global standard)</li>
<li><strong>Phase 2</strong>: Add Simplified Chinese (target the largest growing market)</li>
<li><strong>Phase 3</strong>: Add Russian and Korean (capture growing major markets)</li>
<li><strong>Phase 4</strong>: Gradually add other Tier 2-3 languages (Spanish, Portuguese, etc.)</li>
</ol>
<h2>Practical Implementation Tips {#practical-tips}</h2>
<h3>Font Support</h3>
<p>Chinese, Korean, Japanese, and Thai each require dedicated fonts. Check in advance whether web fonts or system fonts will work, or if you need to bundle font files with your application.</p>
<h3>Text Length Variation</h3>
<p>Text length varies significantly across languages. UI design needs to be flexible enough to handle these differences.</p>
<ul>
<li><strong>Tend shorter</strong>: Japanese, Chinese, Korean (compact writing systems convey more per character)</li>
<li><strong>Tend longer</strong>: German, Russian (compound words and grammatical cases)</li>
</ul>
<p>Test your buttons and text boxes with longer text strings in advance to avoid overflow issues.</p>
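<p>One way to catch overflow before release is to scan your translated string tables against per-element character budgets. A minimal Python sketch follows; the table contents and budget values here are hypothetical:</p>

```python
# Hypothetical per-language string tables: language code -> {ui_key: text}
translations = {
    "en": {"start_button": "Start Game", "quit_button": "Quit"},
    "de": {"start_button": "Spiel starten", "quit_button": "Beenden"},
}

# Hypothetical character budgets for each UI element
budgets = {"start_button": 12, "quit_button": 8}

def find_overflows(translations, budgets):
    """Return (language, key, length) for every string over its budget."""
    return [
        (lang, key, len(text))
        for lang, table in translations.items()
        for key, text in table.items()
        if len(text) > budgets.get(key, float("inf"))
    ]

for lang, key, length in find_overflows(translations, budgets):
    print(f"{lang}/{key}: {length} chars (budget {budgets[key]})")
```

<p>Here the German "Spiel starten" (13 characters) exceeds its 12-character budget and gets flagged, while every English string fits.</p>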
<h3>Documentation (FAQ)</h3>
<p>Localizing your app means you may receive inquiries in various languages. Responding to every question individually is a huge drain on time. Prepare anticipated questions and answers as documentation, translate them into supported languages, and publish them. This lets users self-serve and significantly reduces developer support burden.</p>
<h3>Right-to-Left (RTL) Languages</h3>
<p>Languages like Arabic and Hebrew are written right-to-left (RTL) and may require flipping the entire UI layout. While these languages aren't in the top tiers currently, keep this in mind if you might support them in the future.</p>
<h2>Designing for Localization from Day One {#conclusion}</h2>
<p>This analysis shows that supporting all ten top-ranked languages would ideally let you reach over 90% of Steam users.</p>
<p>With generative AI tools like ChatGPT, Claude, and Gemini now available, high-quality translation is more accessible than ever. For UI and system text, AI translation alone can deliver quite good results. Of course, for story text and character dialogue that affect immersion, professional translators or native speaker review is still recommended.</p>
<p>The key takeaway is to <strong>design for localization from the very start of development</strong>.</p>
<p>Instead of hardcoding text directly in your program, load it from per-language CSV or JSON files. This makes adding new languages later much easier.</p>
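<p>As a concrete illustration of this pattern, here is a minimal Python sketch of a JSON-backed string loader with English fallback. The <code>locales/en.json</code> layout and the <code>Localizer</code> name are assumptions for this example, not part of any particular engine's API:</p>

```python
import json
from pathlib import Path

class Localizer:
    """Loads one JSON string table per language, e.g. locales/en.json
    containing {"greeting": "Hello"}. The file layout is hypothetical."""

    def __init__(self, locale_dir: str, language: str, fallback: str = "en"):
        self.strings = self._load(Path(locale_dir) / f"{fallback}.json")
        # Overlay the target language so untranslated keys fall back to English
        self.strings.update(self._load(Path(locale_dir) / f"{language}.json"))

    @staticmethod
    def _load(path: Path) -> dict:
        if path.exists():
            return json.loads(path.read_text(encoding="utf-8"))
        return {}

    def tr(self, key: str) -> str:
        # Return the key itself for missing entries so gaps are visible in-game
        return self.strings.get(key, key)
```

<p>Adding a language then means dropping a new JSON file into the folder, with no code changes required.</p>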
<p>There are practical considerations, such as preparing fonts and designing UI that tolerates varying text lengths, but the implementation barrier is steadily dropping thanks to AI advances. If you're aiming for the global market, I encourage you to consider localization seriously.</p>
<p>:::ad</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/d798e490248249208ce115800b86d8e4/20250704-1-Steam202506%E8%A8%80%E8%AA%9E%E5%88%A5%E3%83%A6%E3%83%BC%E3%82%B6%E3%83%BC%E6%95%B0.png" medium="image"/></item><item><title><![CDATA[13 Essential Godot Features Every 2D Game Developer Should Know (With Recommended Course)]]></title><description><![CDATA[Essential Godot features for 2D game beginners. Covers pause control, knockback mechanics, auto-tiling, built-in functions, and other key concepts for practical game development.]]></description><link>https://uhiyama-lab.com/en/blog/gamedev/godot-2d-game-practical-techniques/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/gamedev/godot-2d-game-practical-techniques/</guid><category><![CDATA[godot]]></category><category><![CDATA[learning]]></category><pubDate>Fri, 20 Jun 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="./images/20250613_02_Godot-Udemy%E3%82%B3%E3%83%BC%E3%82%B902.webp" alt=""></p>
<p>In my <a href="/en/blog/gamedev/godot-for-unity-developer/">previous article</a>, I covered the basics of learning Godot. This time, I completed the Udemy course "<a href="https://trk.udemy.com/Oex4on">Godot4: Build a 2D Action-Adventure Game</a>" and tackled building a practical 2D action-adventure game.</p>
<p><img src="/48132ef7a853ca7593bf0e7303d67155/20250620-1-Godot4-Build-a-2D-Action-Adventure-Game-%E4%BF%AE%E4%BA%86%E8%A8%BC%E6%9B%B8.png" alt=""></p>
<p>The course systematically covers everything from basic player controls to NPC dialogue, puzzles, and combat systems. This article distills the key Godot features and concepts I learned along the way.</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#course-overview">Course Overview</a></li>
<li><a href="#godot-deep-dive">Key Godot Features &#x26; Concepts</a>
<ul>
<li><a href="#process-modes">Process Mode: Pause Control</a></li>
<li><a href="#motion-modes">CharacterBody2D Motion Mode</a></li>
<li><a href="#input-map">InputMap: Key Binding Management</a></li>
<li><a href="#input-processing">Input Processing Methods</a></li>
<li><a href="#movement-functions">move_and_slide vs move_toward</a></li>
<li><a href="#groups">Groups: Flexible Object Identification</a></li>
<li><a href="#collision-system">Collision Layers/Masks</a></li>
<li><a href="#terrains">Terrains: Auto-Tiling</a></li>
<li><a href="#marker2d">Marker2D: The Go-To Manager Node</a></li>
<li><a href="#scene-inheritance">Editable Children &#x26; Scene Inheritance</a></li>
<li><a href="#autoload">Autoload: Cross-Scene Data Management</a></li>
<li><a href="#visual-effects">White Flash with modulate</a></li>
<li><a href="#animation-systems">AnimatedSprite2D vs AnimationPlayer</a></li>
</ul>
</li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Course Overview {#course-overview}</h2>
<p><img src="/6c7c7b963cc6743257b455a2face5022/20250620-2-Godot4-Build-a-2D-Action-Adventure-Game.png" alt=""></p>
<p>"<a href="https://trk.udemy.com/Oex4on">Godot4: Build a 2D Action-Adventure Game</a>" builds a 2D action-adventure game from scratch. As you work through it, questions like these get answered naturally:</p>
<ul>
<li>How do you implement area transitions?</li>
<li>How do you handle pushable objects?</li>
<li>How do you persist opened treasure chest states?</li>
<li>How do you manage NPC dialogue?</li>
<li>How do you implement a door that requires multiple switches?</li>
<li>How do you create a damage flash effect?</li>
</ul>
<p>The curriculum covers 8-directional player movement, Terrains-based auto-tiling, Y-Sort for draw order, RigidBody2D physics puzzles, dialogue systems, Autoload for data persistence, and enemy AI with knockback combat.</p>
<p>It's a step up in difficulty from the beginner course I covered <a href="/en/blog/gamedev/godot-for-unity-developer/">previously</a>, but packed with knowledge you actually need for real game development. Udemy runs sales frequently — add it to your wishlist and grab it on sale.</p>
<p>:::post-link{url="<a href="https://trk.udemy.com/Oex4on">https://trk.udemy.com/Oex4on</a>" text="Godot4: Build a 2D Action-Adventure Game (Udemy)"}</p>
<p>:::ad</p>
<h2>Key Godot Features &#x26; Concepts {#godot-deep-dive}</h2>
<p>Here's a deep dive into the features I found most important during the course.</p>
<h3>Process Mode: Pause Control {#process-modes}</h3>
<p><img src="/1ec4a7ad68f1d39a7e2065226dd31075/20250620-3-Godot-Node-Process.png" alt=""></p>
<p>"Freeze enemies and the player during NPC dialogue, but keep the dialogue window interactive" — a common game development requirement. Godot handles this with per-node Process Mode settings.</p>
<ul>
<li><strong>Pausable (default)</strong>: Stops when <code>get_tree().paused = true</code>. For game world objects like players and enemies.</li>
<li><strong>Always</strong>: Ignores pause state. For active NPCs, UI, and background music.</li>
<li><strong>When Paused</strong>: Only runs during pause. For pause menu UI.</li>
</ul>
<p>In Unity, you'd set <code>Time.timeScale = 0</code> and manually use <code>Time.unscaledDeltaTime</code> in individual scripts. Godot's approach of setting Process Mode per node is far more elegant.</p>
<p>Implementation: pausing the game during dialogue</p>
<p><img src="/a52d52be285993f1653fe23fcc4ccd51/20250620-4-Godot-Node-Process-DIalogue_1.webp" alt=""></p>
<p><em>The scene pauses during conversation and resumes when dialogue ends</em></p>
<pre><code class="language-gdscript"># NPC.gd
# Set this NPC's Process Mode to "Always" in the inspector

func _process(delta):
    if Input.is_action_just_pressed("interact") and can_talk:
        if is_dialog_active():
            close_dialog()
            get_tree().paused = false
        else:
            open_dialog()
            get_tree().paused = true
</code></pre>
<p>The NPC keeps running while the rest of the game pauses, allowing safe dialogue open/close handling.</p>
<p>:::ad</p>
<h3>CharacterBody2D Motion Mode {#motion-modes}</h3>
<p><img src="/89348029df3ef2a43cd3e8d383636730/20250620-5-CharacterBody2D-MotionMode.png" alt=""></p>
<p><code>CharacterBody2D</code> has a Motion Mode setting that switches physics behavior based on your game's genre. Always check this at project start.</p>
<ul>
<li><strong>Grounded (default)</strong>: Gravity applies, <code>is_on_floor()</code> works. For platformers and side-scrollers.</li>
<li><strong>Floating</strong>: No gravity, no floor concept. For top-down action games and shooters.</li>
</ul>
<pre><code class="language-gdscript"># Player.gd (Motion Mode set to "Floating")
extends CharacterBody2D

@export var speed: float = 200.0

func _physics_process(delta):
    var direction = Input.get_vector("move_left", "move_right", "move_up", "move_down")
    velocity = direction * speed
    move_and_slide()
</code></pre>
<p>Wrong settings cause unexpected gravity or broken floor detection — an easy mistake to make.</p>
<p>:::ad</p>
<h3>InputMap: Key Binding Management {#input-map}</h3>
<p><img src="/7a26659119f2bcd4e9c57ef57d2607e0/20250620-6-InputMap.png" alt="Godot InputMap"></p>
<p>In Unity, it's tempting to write <code>Input.GetKey(KeyCode.A)</code> directly. Godot's Input Map (Project Settings → Input Map) takes a better approach: define action names and bind keys to them. Your code uses abstract names like <code>"move_left"</code>, making it more readable and easy to add key remapping.</p>
<pre><code class="language-gdscript">func _process(delta):
    # One-shot input (moment of press)
    if Input.is_action_just_pressed("interact"):
        open_chest()

    # Continuous input (while held)
    if Input.is_action_pressed("dash"):
        speed = DASH_SPEED
    else:
        speed = NORMAL_SPEED

    # Get 4-directional input as a normalized Vector2 (very handy)
    var direction = Input.get_vector("move_left", "move_right", "move_up", "move_down")
    velocity = direction * speed
    move_and_slide()
</code></pre>
<p><code>Input.get_vector()</code> returns a normalized <code>Vector2</code> from four actions, letting you write top-down movement in a single line.</p>
<p>:::ad</p>
<h3>Input Processing Methods {#input-processing}</h3>
<p>Once you've defined actions in InputMap, choose the right method based on input state.</p>
<p>:::post-link{url="<a href="https://docs.godotengine.org/en/stable/classes/class_input.html">https://docs.godotengine.org/en/stable/classes/class_input.html</a>" text="Input Class Reference (Godot Docs)"}</p>
<table>
<thead>
<tr>
<th>Use Case</th>
<th>Method</th>
<th>Examples</th>
</tr>
</thead>
<tbody>
<tr>
<td>One-shot on press</td>
<td><code>is_action_just_pressed()</code></td>
<td>Jump, attack, menu toggle</td>
</tr>
<tr>
<td>While held</td>
<td><code>is_action_pressed()</code></td>
<td>Movement, dash, charge</td>
</tr>
<tr>
<td>On release</td>
<td><code>is_action_just_released()</code></td>
<td>Fire charged attack</td>
</tr>
<tr>
<td>Analog (0.0–1.0)</td>
<td><code>get_action_strength()</code></td>
<td>Gamepad trigger</td>
</tr>
</tbody>
</table>
<pre><code class="language-gdscript">func _process(delta):
    # One-shot actions
    if Input.is_action_just_pressed("jump"):
        if is_on_floor():
            velocity.y = JUMP_VELOCITY

    if Input.is_action_just_pressed("attack"):
        perform_attack()

    # Continuous actions
    if Input.is_action_pressed("dash"):
        current_speed = dash_speed
    else:
        current_speed = normal_speed

    # Charge: hold to build, release to fire
    if Input.is_action_pressed("charge"):
        charge_power += charge_rate * delta
        charge_power = min(charge_power, max_charge)

    if Input.is_action_just_released("charge"):
        fire_charged_shot(charge_power)
        charge_power = 0.0
</code></pre>
<p>A common mistake: using <code>is_action_pressed()</code> for shooting fires a bullet every frame. Always use <code>is_action_just_pressed()</code> for one-shot actions.</p>
<p>:::ad</p>
<h3>move_and_slide vs move_toward {#movement-functions}</h3>
<p><img src="/1ed07252c30fcaa6b53d8159476479d7/20250620-7-Godot-Knockback-Move_toward_1.webp" alt="Knockback requires move_toward"></p>
<p>Two essential movement functions in Godot:</p>
<ul>
<li><strong><code>move_and_slide()</code></strong>: The workhorse of <code>CharacterBody2D</code>. Moves based on current <code>velocity</code> and automatically handles wall/floor collisions.</li>
<li><strong><code>move_toward(target, delta)</code></strong>: Gradually shifts a value toward a target. Used for smooth acceleration and deceleration.</li>
</ul>
<p>The distinction becomes critical when implementing knockback. Here's a real problem I hit during the course:</p>
<p>Directly assigning <code>velocity</code> means the moment a player inputs movement during knockback, the knockback effect vanishes instantly. Using <code>move_toward</code> for gradual velocity changes lets knockback decay naturally into normal movement.</p>
<pre><code class="language-gdscript"># Direct assignment — knockback disappears instantly
func move_player():
    var move_vector = Input.get_vector("move_left", "move_right", "move_up", "move_down")
    velocity = move_vector * move_speed  # Overwrites knockback velocity

# move_toward — knockback decays smoothly
@export var acceleration: float = 500.0

func move_player(delta: float):  # Pass delta in from _physics_process
    var move_vector = Input.get_vector("move_left", "move_right", "move_up", "move_down")
    var target_velocity = move_vector * move_speed
    velocity = velocity.move_toward(target_velocity, acceleration * delta)

# Knockback
func apply_knockback(direction: Vector2, strength: float):
    velocity += direction * strength
</code></pre>
<p>The second argument of <code>move_toward</code> caps how far the value moves toward the target per call. Using <code>acceleration * delta</code> ensures frame-rate-independent smooth transitions.</p>
<p>:::ad</p>
<h3>Groups: Flexible Object Identification {#groups}</h3>
<p><img src="/0cda0501e8b92a2c6b1bbb08f1aff61c/20250620-8-Node-Groups.webp" alt=""></p>
<p>"Did the attack hit an enemy or a pushable object?" — Godot uses groups to answer this.</p>
<p>Similar to Unity's Tag system, but with a crucial difference: Unity allows only one Tag per GameObject. Godot groups support multiple assignments, so an object can be both "enemies" and "damageable" simultaneously.</p>
<p>Setup is simple: select a node → Inspector → Node tab → Groups → add a group name. In code, use <code>is_in_group()</code> to check.</p>
<pre><code class="language-gdscript"># Connected to player's sword Area2D
func _on_sword_area_body_entered(body: Node2D):
    if body.is_in_group("enemies"):
        body.take_damage(attack_power)
    elif body.is_in_group("pushable"):
        pass  # Handle pushable object
</code></pre>
<p>:::ad</p>
<h3>Collision Layers/Masks {#collision-system}</h3>
<p><img src="/570771fe74938039b9d63decbef52c05/20250620-9-CollisionLayer.png" alt=""></p>
<p>Without proper collision organization, you'll get "the player's sword hits allies" or "enemies get stuck on each other." Godot's Collision Layers/Masks prevent this.</p>
<ul>
<li><strong>Layer</strong>: Which collision layer this object exists on</li>
<li><strong>Mask</strong>: Which layers this object checks for collisions</li>
</ul>
<p>For example, with Player (Layer 1), Enemies (Layer 2), and Weapons (Layer 3):</p>
<table>
<thead>
<tr>
<th>Object</th>
<th>Layer</th>
<th>Mask</th>
<th>Reason</th>
</tr>
</thead>
<tbody>
<tr>
<td>Player</td>
<td>1</td>
<td>2</td>
<td>Takes damage from enemies</td>
</tr>
<tr>
<td>Enemy</td>
<td>2</td>
<td>1</td>
<td>Only contacts player (enemies pass through each other)</td>
</tr>
<tr>
<td>Player's weapon</td>
<td>3</td>
<td>2</td>
<td>Only hits enemies (doesn't collide with player)</td>
</tr>
</tbody>
</table>
<pre><code class="language-gdscript"># Weapon's Mask only detects enemies, so only enemies trigger this
func _on_weapon_area_body_entered(body):
    if body.is_in_group("enemies"):
        body.take_damage(attack_power)
</code></pre>
<p><img src="/12990fb13dc09ab98291a9dadccd21d6/20250620-10-CollisionLayerName.png" alt=""></p>
<p>Name your layers in Project Settings → Layer Names to make inspector setup much clearer.</p>
<p>:::ad</p>
<h3>Terrains: Auto-Tiling {#terrains}</h3>
<p><img src="/0b310fe96d946acd6caf5e2aa8a84265/20250620-11-Terrains%E6%A9%9F%E8%83%BD.webp" alt=""></p>
<p>Godot's Terrains feature makes auto-tiling surprisingly easy. If you've used Rule Tiles in Unity, you'll appreciate how intuitive this is.</p>
<p><img src="/c66f8a6a0460fe10e05f2dc1fc60b779/20250620-12-Terrains-Paint.png" alt=""></p>
<p>In the <code>TileSet</code> resource's Terrains tab, you visually paint which edges of each tile connect to which terrain type. Then the <code>TileMap</code> editor's brush tool automatically selects the right tile based on boundaries.</p>
<p>Handy features include: random brush (dice icon) for selecting from multiple tiles, probability control for tile frequency, and "F" key for batch-applying collision to all tiles.</p>
<p>:::ad</p>
<h3>Marker2D: The Go-To Manager Node {#marker2d}</h3>
<p>In Unity, the standard practice was attaching scripts to empty GameObjects to create "managers." In Godot, <strong>Marker2D</strong> fills this role.</p>
<p>Marker2D holds only position data (Transform) — the lightest 2D node available. No rendering, no physics overhead, making it perfect for scene management scripts like GameManager or PuzzleManager. Unlike Unity's empty GameObjects, it shows a crosshair marker in the editor for easy visibility.</p>
<p>:::ad</p>
<h3>Editable Children &#x26; Scene Inheritance {#scene-inheritance}</h3>
<p>Two approaches for creating variations from a base scene, each suited to different use cases:</p>
<ul>
<li><strong>Editable Children</strong>: Right-click an instance → "Editable Children" to directly modify its internals (sprites, colliders, etc.). Changes only apply to that specific placement. Great for mass-producing NPCs that differ only in appearance or dialogue.</li>
<li><strong>Scene Inheritance</strong>: Create a new scene (Shopkeeper.tscn) that inherits from a base (BaseNPC.tscn). Add unique nodes and scripts while keeping parent functionality. Best for functionally different variants like a merchant NPC that can "talk" and "trade."</li>
</ul>
<p><img src="/ed829aa20a87c40a9ab62aa2176b093f/20250620-13-EditableChildren.png" alt="Editable Children nodes appear in yellow"></p>
<table>
<thead>
<tr>
<th>Aspect</th>
<th>Editable Children</th>
<th>Scene Inheritance</th>
</tr>
</thead>
<tbody>
<tr>
<td>Best for</td>
<td>Villager A, B, etc. (visual/dialogue differences)</td>
<td>Merchant, Blacksmith (unique functionality)</td>
</tr>
<tr>
<td>Reusability</td>
<td>Low (local to that scene)</td>
<td>High (inherited scene usable everywhere)</td>
</tr>
<tr>
<td>Management</td>
<td>Simple (base scene only)</td>
<td>Structured (functionality split across files)</td>
</tr>
</tbody>
</table>
<p>:::ad</p>
<h3>Autoload: Cross-Scene Data Management {#autoload}</h3>
<p>"Open a treasure chest, leave the area, come back — and it's closed again." The classic scene-switching data loss problem is solved in Godot with <strong>Autoload</strong> — equivalent to Unity's <code>DontDestroyOnLoad</code> + singleton pattern.</p>
<p>Register a script as Autoload in Project Settings, and it loads automatically at game start, accessible globally from any scene.</p>
<p><img src="/aaef1d7db4fa7da8977180c4213f3d54/20250620-14-Autoload.png" alt=""></p>
<pre><code class="language-gdscript"># GameManager.gd (registered as AutoLoad)
extends Node

var opened_chests: Array[String] = []
var player_hp: int = 3
var player_spawn_position: Vector2
</code></pre>
<pre><code class="language-gdscript"># TreasureChest.gd
extends StaticBody2D

@export var chest_id: String  # Set a unique ID like "forest_chest_01"

func _ready():
    if GameManager.opened_chests.has(chest_id):
        play_open_animation(false)

func open_chest():
    GameManager.opened_chests.append(chest_id)
    play_open_animation(true)
</code></pre>
<p>Player HP, scores, inventory, quest progress — any data that needs to survive scene changes goes through Autoload.</p>
<p>:::ad</p>
<h3>White Flash with modulate {#visual-effects}</h3>
<p><img src="/1ed07252c30fcaa6b53d8159476479d7/20250620-7-Godot-Knockback-Move_toward_1.webp" alt="Knockback requires move_toward"></p>
<p>A brief white flash on damage is incredibly effective player feedback. In Godot, it's trivial to implement using the modulate property.</p>
<p><code>modulate</code> is a color value multiplied against a node and all its descendants. Default is white (1, 1, 1). Higher values = brighter, lower = darker. Changing CharacterBody2D's modulate automatically affects its child AnimatedSprite2D — no individual sprite manipulation needed.</p>
<pre><code class="language-gdscript"># Player.gd
func take_damage(amount):
    # ...damage calculation...
    flash_effect()

func flash_effect():
    modulate = Color(2, 2, 2)  # Flash white
    await get_tree().create_timer(0.1).timeout  # Wait 0.1 seconds
    modulate = Color(1, 1, 1)  # Restore original
</code></pre>
<p><code>await get_tree().create_timer(0.1).timeout</code> is a one-liner for temporary delays without adding timer nodes.</p>
<p>:::ad</p>
<h3>AnimatedSprite2D vs AnimationPlayer {#animation-systems}</h3>
<p>Godot has two main 2D animation systems, each suited to different needs.</p>
<h4>AnimatedSprite2D</h4>
<p><img src="/ba1b7e59a402de97b2d64a029c514cc9/20250620-15-AnimatedSprite2D.webp" alt=""></p>
<p>Dedicated to sprite frame animation. Best for walk cycles, attacks, idles — anything that's just swapping sprite sheet frames.</p>
<pre><code class="language-gdscript">if velocity.x > 0:
    $AnimatedSprite2D.play("move_right")
elif velocity.x &#x3C; 0:
    $AnimatedSprite2D.play("move_left")
else:
    $AnimatedSprite2D.stop()
</code></pre>
<h4>AnimationPlayer</h4>
<p><img src="/aed1f930237a4bc64fe862e8674a7055/20250620-16-AnimationPlayer.webp" alt=""></p>
<p>A general-purpose animation system controlling position, rotation, scale, and any property simultaneously. Similar to Unity's Animator — used for sword swings, UI animations, and camera work.</p>
<pre><code class="language-gdscript">func attack():
    var player_animation: String = $AnimatedSprite2D.animation
    if player_animation == "move_right":
        $AnimatedSprite2D.play("attack_right")
        $AnimationPlayer.play("attack_right")  # Controls sword position &#x26; angle
</code></pre>
<h4>Selection Guide</h4>
<table>
<thead>
<tr>
<th>Animation Content</th>
<th>Recommended System</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sprite frame switching only</td>
<td>AnimatedSprite2D</td>
</tr>
<tr>
<td>Position/rotation/scale changes</td>
<td>AnimationPlayer</td>
</tr>
<tr>
<td>Multi-object synchronization</td>
<td>AnimationPlayer</td>
</tr>
<tr>
<td>Complex state transitions</td>
<td>AnimationTree</td>
</tr>
</tbody>
</table>
<p>In practice, you'll typically combine AnimatedSprite2D for basic character animation with AnimationPlayer for weapon and effect animations.</p>
<p>:::ad</p>
<h2>Conclusion {#conclusion}</h2>
<p>This course reinforced how consistent Godot's design philosophy is. Process Mode, Motion Mode, Collision Layers: the concepts follow a unified pattern, and the solutions to common problems are straightforward.</p>
<p>What stands out most is how many "I wish Unity had this built-in" features come standard: Terrains, Y-Sort, <code>move_and_slide()</code>, and more. For Unity developers, Godot offers a compelling combination of low learning curve and development comfort.</p>
<p>:::post-link{url="<a href="https://trk.udemy.com/Oex4on">https://trk.udemy.com/Oex4on</a>" text="Godot4: Build a 2D Action-Adventure Game (Udemy)"}</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/59200aedc8d0c070a26cf5f9f88d6743/20250620-%E3%80%90Godot%E3%80%91%E3%81%93%E3%82%8C%E3%81%8B%E3%82%89Godot-Engine%E3%82%92%E5%A7%8B%E3%82%81%E3%82%8B2D%E3%82%B2%E3%83%BC%E3%83%A0%E9%96%8B%E7%99%BA%E8%80%85%E3%81%8C%E7%9F%A5%E3%81%A3%E3%81%A6%E3%81%8A%E3%81%8F%E3%81%B9%E3%81%8D13%E3%81%AE%E9%87%8D%E8%A6%81%E6%A9%9F%E8%83%BD%E3%81%A8%E3%82%AA%E3%82%B9%E3%82%B9%E3%83%A1%E6%95%99%E6%9D%90.png" medium="image"/></item><item><title><![CDATA[A Unity Developer's Guide to Godot: Comparison Table and Recommended Resources]]></title><description><![CDATA[A Unity developer's learning journal on Godot Engine. Includes a Unity-to-Godot comparison table, recommended courses, and key takeaways from first-time Godot experience.]]></description><link>https://uhiyama-lab.com/en/blog/gamedev/godot-for-unity-developer/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/gamedev/godot-for-unity-developer/</guid><category><![CDATA[godot]]></category><category><![CDATA[learning]]></category><pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="/b98a8feef9781d72c3c987f8c64dd24d/20250613_01_Godot-%E3%83%87%E3%83%A2%E3%82%B2%E3%83%BC%E3%83%A0.webp" alt="">
<em>Godot's official demo project (<a href="https://docs.godotengine.org/en/stable/getting_started/first_2d_game/">Your first 2D game</a>)</em></p>
<p>Godot Engine is an MIT-licensed, open-source game engine first released in 2014. It surged in popularity after Unity's Runtime Fee controversy in 2023, as many developers began exploring it as an alternative.</p>
<p><img src="/0d81a8243d740411ea7d495c318b51f0/20250613_02_Godot-Udemy%E3%82%B3%E3%83%BC%E3%82%B902.webp" alt="">
<em>A Udemy course I'm currently studying: "<a href="https://trk.udemy.com/55NY9j">Godot4: Build a 2D Action-Adventure Game</a>"</em></p>
<p>This article covers everything a Unity developer needs to get started with Godot: a Udemy course review, first impressions from hands-on experience, a Unity-to-Godot concept mapping table, and side-by-side code comparisons.</p>
<p>:::post-link{url="<a href="https://godotengine.org/">https://godotengine.org/</a>" text="Godot Engine Official Website"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#godot-overview">Godot Engine Q&#x26;A</a></li>
<li><a href="#udemy-godot-course">Udemy Course Review</a></li>
<li><a href="#godot-impressions">What Impressed Me About Godot</a></li>
<li><a href="#unity-developers-api">For Unity Developers: Comparison Table</a></li>
<li><a href="#unity-developers-code">For Unity Developers: Code Comparison</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Godot Engine Q&#x26;A {#godot-overview}</h2>
<p><img src="/603933730c8f88496e1991167f729df8/20250613_03_Godot%E3%81%A3%E3%81%A6%E3%81%AA%E3%81%AB%EF%BC%9F.png" alt=""></p>
<p>Before diving into Godot, I had a bunch of questions. "What's actually special about Godot?" "Isn't the node-based system complicated?" "Is GDScript even worth learning?" — if you're asking the same things, you're not alone. Here are my answers after actually getting my hands on the engine.</p>
<p><strong>Q. What is Godot Engine?</strong>
An open-source game engine released in 2014. Fully free under the MIT license with zero royalties. The engine core is written in C++.</p>
<p><strong>Q. First impression compared to Unity and Unreal Engine?</strong>
It's incredibly light. The entire engine is about 149MB as of v4.4.1. You can start developing in Godot before Unity's installer even finishes loading. The trade-off is that community resources are still growing.</p>
<p><strong>Q. What notable games were made with Godot?</strong>
"Backpack Battles," "Buckshot Roulette," and "Unrailed 2: Back on Track," among others. Some surprisingly well-known titles in there. Check the <a href="https://godotengine.org/showcase/">official Showcase page</a> for a full list — the annual highlight videos are especially worth watching.</p>
<p><strong>Q. What is the node-based system?</strong>
A system where you build functionality through a hierarchy of nodes. For example, a physics-enabled character would have a CharacterBody2D node with AnimatedSprite2D (animation) and CollisionShape2D (collision) as children. It's similar to attaching components in Unity, so Unity developers should find it familiar.</p>
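The same hierarchy can be sketched in code — nodes are ordinary classes, so the tree described above can be assembled programmatically (a sketch only; in practice you build this tree in the scene editor):
<pre><code class="language-gdscript"># Sketch only — normally you assemble this tree in the editor, not in code.
extends Node2D

func _ready():
    var body = CharacterBody2D.new()      # physics-enabled root
    var sprite = AnimatedSprite2D.new()   # handles animation
    var shape = CollisionShape2D.new()    # handles collision
    body.add_child(sprite)
    body.add_child(shape)
    add_child(body)
</code></pre>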
<p><strong>Q. Doesn't a node-based system limit flexibility?</strong>
You can extend nodes with custom classes, so the coding freedom is the same as Unity or Unreal Engine.</p>
<p><strong>Q. What is GDScript?</strong>
Godot's own scripting language with Python-like syntax. While C# is also supported, starting with GDScript is far more efficient for learning. The syntax is simple and the engine integration runs deep.</p>
<p>:::ad</p>
<h2>Udemy Course Review {#udemy-godot-course}</h2>
<p>When learning a new engine, I like to complement official documentation with multiple video tutorials. This time I took the "<a href="https://trk.udemy.com/nXQoM9">Godot Engineで気軽に2Dゲームを作ろう</a>" course on Udemy and completed it in about 6.5 hours.</p>
<p><img src="/4eaa4758da3b260a279877ac55228e14/20250613_04_Udemy%E4%BF%AE%E4%BA%86%E8%A8%BC%E6%9B%B8.jpg" alt=""></p>
<p>The course starts with fundamentals (project setup, understanding nodes and scenes, Windows/Web builds) then moves into building an original 2D game. Each video is 1–5 minutes, keeping things snappy and focused.</p>
<p><img src="/fccd6ce2ee50d024dc4ccecbae4d9ca4/20250613_05_Godot-Engine%E3%81%A7%E6%B0%97%E8%BB%BD%E3%81%AB2D%E3%82%B2%E3%83%BC%E3%83%A0%E3%82%92%E4%BD%9C%E3%82%8D%E3%81%86.webp" alt="">
<em>The completed game from the course</em></p>
<p>The highlight was procedural maze generation using a recursive backtracker algorithm. You also implement UI, data saving, a shooting system, and enemy AI — building a full game cycle from title screen to area selection to maze and back.</p>
<p><img src="/5092f5b98ceb3bc1295b287484c5e3c3/20250613_06_%E7%A9%B4%E6%8E%98%E3%82%8A%E6%B3%95%E3%82%A2%E3%83%AB%E3%82%B4%E3%83%AA%E3%82%BA%E3%83%A0.png" alt="">
<em>Recursive backtracker algorithm execution log</em></p>
<p>Quality Godot courses in Japanese are still rare, making this one particularly valuable. I'd recommend using it to get the big picture before tackling more advanced English courses. Udemy runs sales frequently, so add it to your wishlist.</p>
<p>As a learning tip: asking ChatGPT or Claude "What's the Unity/Unreal equivalent of this?" while studying Godot dramatically speeds up comprehension. Mapping <code>@export</code> → <code>SerializeField</code> or <code>get_tree().change_scene_to_file()</code> → <code>SceneManager.LoadScene</code> connects new concepts to what you already know, and the difference in learning efficiency is night and day.</p>
<p>:::post-link{url="<a href="https://trk.udemy.com/nXQoM9">https://trk.udemy.com/nXQoM9</a>" text="Godot Engineで気軽に2Dゲームを作ろう (Udemy)"}</p>
<p>:::ad</p>
<h2>What Impressed Me About Godot {#godot-impressions}</h2>
<p><img src="/6ffdffa973bd1a65261aa229a54e4402/20250613_07_AnimatedSprte2D.png" alt=""></p>
<p>Going through this course gave me a genuine appreciation for Godot's design philosophy. It takes a distinctly different approach from Unity/Unreal Engine, and especially for 2D development, I kept thinking "this is exactly what I wanted."</p>
<ul>
<li><strong>Nodes and signals feel intuitive</strong>: These are Godot's two core concepts. Nodes define structure, signals handle communication. Once you get used to the combo, it flows more naturally than Unity's GetComponent + UnityEvent pattern.</li>
<li><strong>Lightweight = comfortable</strong>: Opening a Unity project can take tens of seconds to minutes. Godot? Click the .exe, wait a few seconds, and you're back in. Compilation is near-instant too, which dramatically lowers the barrier to "just trying something."</li>
<li><strong>Scene reusability</strong>: The "scene = Prefab + Scene" concept was confusing at first, but in practice it feels just like Unity. Need a GameManager? Attach a script to a Marker2D, make it a scene, set it as AutoLoad for singleton behavior. Done.</li>
<li><strong>Native 2D support</strong>: TileMap, AnimatedSprite2D, and other 2D-specific nodes are built in from the start. Where Unity's 2D is "2D on top of a 3D engine," Godot's 2D is native.</li>
<li><strong>Sprite sheet workflow</strong>: In Unity, you slice sprites and configure each frame individually. In Godot, you pick images from a grid and they're instantly arranged on the timeline. A huge quality-of-life win for 2D developers.</li>
<li><strong>Tileset management</strong>: Collision shapes are set visually, and Physics Layers let you efficiently manage per-tile collision. Fewer steps than Unity's tilemap setup.</li>
</ul>
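The AutoLoad pattern mentioned above is worth a quick sketch, since it replaces Unity's <code>DontDestroyOnLoad</code> singleton boilerplate. Assuming a script registered as "GameState" under Project Settings → AutoLoad (the name and fields are illustrative):
<pre><code class="language-gdscript"># game_state.gd — registered as "GameState" in Project Settings > AutoLoad.
# Godot instantiates it once at startup and keeps it alive across scene
# changes, so every script can reach it by name.
extends Node

var score: int = 0

func add_score(points: int) -> void:
    score += points
</code></pre>
<pre><code class="language-gdscript"># From any other script — no singleton plumbing needed:
func _on_enemy_defeated():
    GameState.add_score(100)
</code></pre>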
<p>:::ad</p>
<h2>For Unity Developers: Comparison Table {#unity-developers-api}</h2>
<p><img src="/0c97dc71e5e0f5c153e6c381222653b7/20250613_08-Godot-Unity%E5%AF%BE%E5%BF%9C%E8%A1%A8.png" alt=""></p>
<p>When learning Godot as a Unity developer, the first question is always "What's the Godot equivalent of X?" Scanning through this table before you start studying will make everything click faster.</p>
<table>
<thead>
<tr>
<th>Godot</th>
<th>Unity (Nearest Equivalent)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Scene (.tscn)</td>
<td>Prefab + Scene</td>
</tr>
<tr>
<td><code>get_tree().change_scene_to_file()</code></td>
<td><code>SceneManager.LoadScene()</code></td>
</tr>
<tr>
<td><code>$</code> syntax / <code>get_node()</code></td>
<td><code>GameObject.Find()</code> / <code>FindObjectOfType()</code></td>
</tr>
<tr>
<td><code>get_parent()</code></td>
<td><code>transform.parent</code></td>
</tr>
<tr>
<td><code>_ready()</code></td>
<td><code>Start()</code></td>
</tr>
<tr>
<td><code>_process(delta)</code></td>
<td><code>Update()</code></td>
</tr>
<tr>
<td><code>_physics_process(delta)</code></td>
<td><code>FixedUpdate()</code></td>
</tr>
<tr>
<td><code>CanvasLayer</code></td>
<td>Canvas (Sorting Order)</td>
</tr>
<tr>
<td><code>Control</code></td>
<td>UI GameObject</td>
</tr>
<tr>
<td><code>Label</code></td>
<td>Text / TextMeshPro</td>
</tr>
<tr>
<td>Signal</td>
<td>UnityEvent / C# Event</td>
</tr>
<tr>
<td><code>@export</code></td>
<td><code>[SerializeField]</code></td>
</tr>
<tr>
<td><code>@onready</code></td>
<td>Initialization in <code>Awake()</code></td>
</tr>
<tr>
<td>AutoLoad</td>
<td><code>DontDestroyOnLoad</code></td>
</tr>
<tr>
<td><code>load()</code> / <code>preload()</code></td>
<td><code>Resources.Load()</code></td>
</tr>
</tbody>
</table>
<p>These are approximate mappings. The exact behavior differs, so try things hands-on to feel the differences for yourself.</p>
<p>:::ad</p>
<h2>For Unity Developers: Code Comparison {#unity-developers-code}</h2>
<p>With the concept mapping in hand, let's compare actual code. GDScript uses Python-like indentation-based syntax and tends to be more concise than C#. The areas most likely to trip up Unity developers are type declaration style, the <code>$</code> syntax for node access, and the signal system.</p>
<h3>Example 1: Exposing Variables (<code>@export</code> vs <code>[SerializeField]</code>)</h3>
<p>In game development, you constantly need to tweak values in the inspector — movement speed, jump height, spawn rates. Rather than hardcoding these, you want them editable in the editor. Unity uses <code>[SerializeField]</code> for this; Godot uses the <code>@export</code> annotation.</p>
<p>Initialization code that goes in Unity's <code>Start()</code> is written in Godot's <code>_ready()</code>.</p>
<p><strong>Godot (GDScript)</strong></p>
<pre><code class="language-gdscript"># Player.gd
extends Node2D

@export var player_name: String = "Hero"
@export var speed: int = 100

# Equivalent to Unity's Start()
func _ready():
    print("Player Name: ", player_name)
    print("Initial Speed: ", speed)
</code></pre>
<p><strong>Unity (C#)</strong></p>
<pre><code class="language-csharp">// Player.cs
using UnityEngine;

public class Player : MonoBehaviour
{
    [SerializeField] private string playerName = "Hero";
    [SerializeField] private int speed = 100;

    // Equivalent to Godot's _ready()
    void Start()
    {
        Debug.Log("Player Name: " + playerName);
        Debug.Log("Initial Speed: " + speed);
    }
}
</code></pre>
<p>Per-frame logic goes in <code>_process(delta)</code> (equivalent to Unity's <code>Update()</code>). The <code>delta</code> parameter serves the same role as <code>Time.deltaTime</code> — the elapsed time since the last frame, used for frame-rate-independent behavior.</p>
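As a minimal sketch of that per-frame callback (the script name and rotation speed are illustrative):
<pre><code class="language-gdscript"># Rotator.gd — spins a node at a frame-rate-independent speed
extends Node2D

@export var degrees_per_second: float = 90.0

# Called every rendered frame; delta is the elapsed time in seconds
# since the previous frame, like Time.deltaTime in Unity's Update()
func _process(delta):
    rotation_degrees += degrees_per_second * delta
</code></pre>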
<h3>Example 2: Physics Processing (<code>_physics_process</code> vs <code>FixedUpdate</code>)</h3>
<p>Moving a character with arrow keys — the bread and butter of game development. In Unity, you attach a <code>Rigidbody2D</code>, call <code>GetComponent</code> in <code>Start()</code> to cache the reference, then set <code>velocity</code> in <code>FixedUpdate()</code>. That's three steps of setup.</p>
<p>Godot's <code>CharacterBody2D</code> has a built-in <code>velocity</code> property and <code>move_and_slide()</code> method, eliminating the component-fetching boilerplate entirely. Compare the two and Godot's simplicity stands out.</p>
<p><strong>Godot (GDScript)</strong></p>
<pre><code class="language-gdscript"># Character.gd
extends CharacterBody2D

const SPEED = 300.0

# Equivalent to Unity's FixedUpdate()
func _physics_process(delta):
    var direction = Input.get_axis("ui_left", "ui_right")
    velocity.x = direction * SPEED
    move_and_slide()  # Godot's convenient built-in function
</code></pre>
<p><strong>Unity (C#)</strong></p>
<pre><code class="language-csharp">// Character.cs
using UnityEngine;

[RequireComponent(typeof(Rigidbody2D))]
public class Character : MonoBehaviour
{
    [SerializeField] private float speed = 300.0f;
    private Rigidbody2D rb;

    void Start()
    {
        rb = GetComponent&#x3C;Rigidbody2D>();
    }

    // Equivalent to Godot's _physics_process()
    void FixedUpdate()
    {
        float moveInput = Input.GetAxis("Horizontal");
        rb.velocity = new Vector2(moveInput * speed, rb.velocity.y);
    }
}
</code></pre>
<h3>Example 3: Getting Nodes (<code>$</code> vs <code>GetComponent</code>)</h3>
<p>"Play a child's animation" or "disable a collider" — these operations come up constantly in game development. In Unity, the standard pattern is fetching a reference with <code>GetComponentInChildren&#x3C;T>()</code> or <code>transform.Find()</code>, then adding a null check before accessing it.</p>
<p>Godot's <code>$</code> syntax (syntactic sugar for <code>get_node()</code>) lets you write node paths directly, making the code remarkably concise. Because the tree structure is explicit, it's immediately clear which node you're accessing.</p>
<p><strong>Godot (GDScript)</strong></p>
<pre><code class="language-gdscript"># Player.gd
extends Node2D

func _ready():
    # Direct access to child node using $ syntax
    $AnimatedSprite2D.play("run")

    # Alternative using get_node()
    var collision_shape = get_node("CollisionShape2D")
    collision_shape.disabled = true
</code></pre>
<p><strong>Unity (C#)</strong></p>
<pre><code class="language-csharp">// Player.cs
using UnityEngine;

public class Player : MonoBehaviour
{
    void Start()
    {
        // Get component from child object
        Animator animator = GetComponentInChildren&#x3C;Animator>();
        if (animator != null)
        {
            animator.Play("run");
        }

        // Find child object by name
        Transform collisionShape = transform.Find("CollisionShape");
        if (collisionShape != null)
        {
            collisionShape.gameObject.SetActive(false);
        }
    }
}
</code></pre>
<h3>Example 4: Signals (<code>signal</code> vs <code>UnityEvent</code>)</h3>
<p>"When the player takes damage, update the HP display in the UI" — inter-object communication is needed everywhere in games. In Unity, you use <code>UnityEvent</code> or C# <code>event</code>, but managing references between senders and receivers can get messy.</p>
<p>Godot's signal system solves this elegantly. Two key features set it apart: <strong>you can wire connections between nodes via the editor GUI</strong>, and <strong>the sender never needs to know about the receiver — a truly decoupled design</strong>. Even when connecting via code, it's just one <code>.connect()</code> call.</p>
<p><strong>Godot (GDScript)</strong></p>
<pre><code class="language-gdscript"># Player.gd
extends Node2D

signal health_changed(new_health)

var health: int = 100

func take_damage(damage: int):
    health -= damage
    health_changed.emit(health)  # Emit signal
</code></pre>
<pre><code class="language-gdscript"># UI.gd
extends Control

func _ready():
    var player = get_node("../Player")
    player.health_changed.connect(_on_health_changed)

func _on_health_changed(new_health: int):
    print("Health is now ", new_health)
</code></pre>
<p><strong>Unity (C#)</strong></p>
<pre><code class="language-csharp">// Player.cs
using UnityEngine;
using UnityEngine.Events;

public class Player : MonoBehaviour
{
    [SerializeField] private UnityEvent&#x3C;int> onHealthChanged;

    private int health = 100;

    public void TakeDamage(int damage)
    {
        health -= damage;
        onHealthChanged?.Invoke(health);  // Invoke event
    }
}
</code></pre>
<pre><code class="language-csharp">// UI.cs
using UnityEngine;

public class UI : MonoBehaviour
{
    [SerializeField] private Player player;

    void Start()
    {
        player.onHealthChanged.AddListener(OnHealthChanged);
    }

    private void OnHealthChanged(int newHealth)
    {
        Debug.Log("Health is now " + newHealth);
    }
}
</code></pre>
<h3>Example 5: Scene Switching (<code>change_scene_to_file</code> vs <code>SceneManager.LoadScene</code>)</h3>
<p>Transitioning to the next stage when a level is cleared. In Unity, you pass a scene name to <code>SceneManager.LoadScene()</code>, but the scene must be registered in Build Settings first.</p>
<p>In Godot, you simply specify the path to a <code>.tscn</code> file directly. Without a registration step like Build Settings, prototyping feels noticeably more agile.</p>
<p><strong>Godot (GDScript)</strong></p>
<pre><code class="language-gdscript"># GameManager.gd
extends Node

func level_complete():
    print("Level complete!")
    get_tree().change_scene_to_file("res://scenes/Level2.tscn")
</code></pre>
<p><strong>Unity (C#)</strong></p>
<pre><code class="language-csharp">// GameManager.cs
using UnityEngine;
using UnityEngine.SceneManagement;

public class GameManager : MonoBehaviour
{
    public void LevelComplete()
    {
        Debug.Log("Level complete!");
        SceneManager.LoadScene("Level2");
    }
}
</code></pre>
<p>:::ad</p>
<h2>Conclusion {#conclusion}</h2>
<p>Godot is a game engine I'd especially recommend to 2D developers. It has clear advantages over Unity/Unreal Engine in sprite and tileset management — pure 2D games suit Godot, while Unity remains the better choice for lighting-heavy titles.</p>
<p>In terms of engine maturity, Unity and Unreal Engine are still overwhelmingly ahead in community resources, store ecosystems, and commercial track records. But Godot-made titles like "Backpack Battles" and "Buckshot Roulette" are steadily growing, and as developers who migrated during the Runtime Fee controversy ship their projects, Godot's share should grow.</p>
<p>The open-source angle is also uniquely compelling. Looking at how Blender steadily won ground in a 3D industry once dominated by Maya and 3ds Max, Godot might follow a similar trajectory. Unity and Unreal Engine remain free until you cross substantial revenue thresholds, so Blender-speed adoption may not happen — but the room to grow as an industry safety net is real.</p>
<p>I'll continue working with Unity and Unreal Engine, but confirming that Godot is a viable third option was a real takeaway. If you're curious, give it a try.</p>
<div class="post-references">
<ul>
<li>
<a href="https://godotengine.org/" target="_blank" rel="noopener noreferrer">Godot Engine Official Website</a>
</li>
<li>
<a href="https://trk.udemy.com/nXQoM9" target="_blank" rel="sponsored noopener noreferrer">Godot Engineで気軽に2Dゲームを作ろう (Udemy)</a>
</li>
<li><a href="https://trk.udemy.com/55NY9j" target="_blank" rel="sponsored noopener noreferrer">Godot4: Build a 2D Action-Adventure Game (Udemy)</a></li>
</ul>
</div>
<p>:::ad</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/9ce861a3079316212a5aaab591197063/20250613-Godot-Thumb.png" medium="image"/></item><item><title><![CDATA[I Got Tired of Searching for Speech Bubble Assets, So I Built 'Speech Bubble Studio']]></title><description><![CDATA[Searching stock sites for speech bubbles was too tedious, so I built 'Speech Bubble Studio.' Just drag the edge to grow a tail, and download as a transparent PNG instantly.]]></description><link>https://uhiyama-lab.com/en/blog/tooldev/speech-bubble-studio/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/tooldev/speech-bubble-studio/</guid><pubDate>Thu, 12 Jun 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>I needed speech bubble images for blog posts and spent way too long browsing stock sites. The tail was facing the wrong way, the line thickness didn't match, or the shape was just slightly off from what I wanted.</p>
<p>Making them from scratch in Photoshop meant fiddling with paths and combining shapes — way too much effort for a simple speech bubble. So I decided to build a browser tool that lets you create them in seconds.</p>
<p><img src="/c8da9eec1d6a672241dbd152ea7ea25d/howto-speech-bubble-studio.webp" alt=""></p>
<p>Just drag the edge of the shape to grow a tail, and you can create any speech bubble you want in seconds.</p>
<p>:::post-link{url="/en/tools/speech-bubble-studio/" text="▶︎ Speech Bubble Studio"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#introduction">The Hassle of Searching for Speech Bubble Assets</a></li>
<li><a href="#what-is-sbs">What Is This Tool?</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#how-to-use">How to Use</a></li>
<li><a href="#teq">When Can You Use It?</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>The Hassle of Searching for Speech Bubble Assets {#introduction}</h2>
<p>Searching stock sites for speech bubbles comes with several problems. The tail points the wrong direction, the line thickness doesn't match your illustration, or you can't find that specific cloud shape you're looking for.</p>
<p>Trying to make your own in Photoshop — combining ellipses and triangles, merging paths — and you start thinking, "Why am I spending this much time on a single speech bubble?"</p>
<p>That's what motivated me to build a tool that makes creating speech bubbles quick and effortless.</p>
<h2>What Is This Tool? {#what-is-sbs}</h2>
<p><img src="/db8bd1dbd440c89edefc2f06f548c537/01-speech-bubble-studio.webp" alt=""></p>
<p>It's a browser-based speech bubble creator. Choose a base shape and line style, then just drag the edge of the preview area. A tail extends from where you drag, and you can freely adjust the tip position.</p>
<p>Your finished speech bubbles can be downloaded as transparent PNG images.</p>
<p>:::post-link{url="/en/tools/speech-bubble-studio/" text="▶︎ Speech Bubble Studio"}</p>
<p>:::ad</p>
<h2>Features {#features}</h2>
<ul>
<li>Drag the edge of the preview to create and adjust tails wherever you want</li>
<li>Customize shape (ellipse, rectangle, cloud, spiky), line thickness, and color</li>
<li>One-click presets like "comic style" and "thought bubble"</li>
<li>Choose between standard (1x) and high-resolution (2x) export as transparent PNG</li>
<li>Free for personal and commercial use (no credit required)</li>
<li>Touch support for smartphones and tablets</li>
</ul>
<h2>How to Use {#how-to-use}</h2>
<p>The basic workflow is three steps.</p>
<ol>
<li><strong>Adjust the base shape</strong></li>
</ol>
<p>Use the settings panel on the right to set the shape and line style. Presets make this easy, but you can also fine-tune width, height, line thickness, and more.</p>
<p><img src="/4820a6d204bfff22157c66e5f0ffb483/02-speech-bubble-studio.png" alt=""></p>
<ol start="2">
<li><strong>Create and adjust the tail</strong></li>
</ol>
<p>In the preview area on the left, click and drag the edge of the shape. A tail extends from that point — use the blue handle to adjust the tip position.</p>
<p><img src="/dcfae9f04b7ba9555ddaa122016a41e3/03-speech-bubble-studio.png" alt=""></p>
<ol start="3">
<li><strong>Download</strong></li>
</ol>
<p>In the "Export" section, choose a resolution (1x/2x) and click the "Download PNG" button. Done!</p>
<h2>When Can You Use It? {#teq}</h2>
<p>Here are some common use cases:</p>
<ul>
<li><strong>Blog and website images:</strong> Perfect for adding dialogue to illustrations or photos.</li>
<li><strong>YouTube thumbnails and video captions:</strong> Great for character dialogue effects in game commentaries and tutorials.</li>
<li><strong>Social media posts:</strong> Adding a speech bubble to an image instantly makes your post more engaging.</li>
</ul>
<p>:::ad</p>
<h2>Conclusion {#conclusion}</h2>
<p>I got tired of the hassle of searching for or hand-making speech bubbles, so I built a tool where you just drag to create them. All speech bubbles you make are free for personal and commercial use. I'd love it if you find it useful for your blog, videos, social media, and more.</p>
<p>:::post-link{url="/en/tools/speech-bubble-studio/" text="▶︎ Speech Bubble Studio"}</p>
<p>:::ad</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/c8da9eec1d6a672241dbd152ea7ea25d/howto-speech-bubble-studio.webp" medium="image"/></item><item><title><![CDATA[I Wanted to Check My Illustration's Color Balance, So I Built the 'Character Color Analyzer']]></title><description><![CDATA[Have you ever wondered what your illustration's color scheme looks like objectively? I built the 'Character Color Analyzer' to visualize color composition ratios in a pie chart, letting you see your color balance in numbers.]]></description><link>https://uhiyama-lab.com/en/blog/tooldev/character-color-analyzer/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/tooldev/character-color-analyzer/</guid><pubDate>Mon, 09 Jun 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>After finishing an illustration, have you ever wondered, "What does my color scheme actually look like from an objective standpoint?" Since we choose colors intuitively while creating, looking back at the finished piece often reveals surprises -- "I didn't realize that color took up so much area" or "Maybe the accent color was too subtle."</p>
<p>Using the eyedropper to sample colors only tells you "this color is used" -- it doesn't show you the overall color balance. If you could see "which colors occupy how much area" in actual numbers, your color habits and areas for improvement would become much clearer.</p>
<p>That's why I built the "Character Color Analyzer" -- a tool that visualizes color composition ratios in a pie chart just by uploading an illustration. All processing runs entirely in the browser, so you can safely analyze even unpublished works.</p>
<p><img src="/27bd3a968b25c341cbf578fe6ccc643d/thumb-character-color-analyzer.png" alt="">
<em>The reference image is fan art drawn by the site owner.</em></p>
<p>:::post-link{url="/en/tools/character-color-analyzer/" text="Character Color Analyzer"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#introduction">Why I Wanted to Know My Color Balance</a></li>
<li><a href="#what-is-cca">What Is This Tool?</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#how-to-use">How to Use It</a></li>
<li><a href="#usage-tips">How to Use the Analysis Results</a></li>
<li><a href="#advanced-features">How the Analysis Works</a></li>
<li><a href="#tech-info">Safe and Browser-Based</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Why I Wanted to Know My Color Balance {#introduction}</h2>
<p>When illustrating, I tend to choose colors by instinct. I'll think "this color feels right" as I paint, and after finishing, I wonder, "But objectively, what's the balance really like?"</p>
<p>Sampling colors with the eyedropper only tells you that a color is present -- it doesn't reveal the proportions. By understanding "which colors occupy how much area," you can uncover your own color habits and find areas to improve.</p>
<p>That's what motivated me to build a tool that visualizes color composition ratios in a pie chart.</p>
<h2>What Is This Tool? {#what-is-cca}</h2>
<p><img src="/7596a643dd0f9ad3bd1b097c5d4c5010/howto-character-color-analyzer.webp" alt=""></p>
<p>It's a tool that analyzes an uploaded illustration image and displays the types of colors used along with their proportions in a pie chart and list.</p>
<p>Use it whenever you want to check the color balance of your illustrations in concrete numbers. A feature to exclude unwanted colors (like backgrounds) from the analysis lets you focus specifically on the character's color scheme.</p>
<p>:::post-link{url="/en/tools/character-color-analyzer/" text="Character Color Analyzer"}</p>
<p>:::ad</p>
<h2>Features {#features}</h2>
<ul>
<li>Start analysis by simply dragging and dropping an image -- nothing is sent to a server</li>
<li>Visualize results with a pie chart and percentages</li>
<li>Colors extracted using CIELAB color space (perceptually uniform) and k-means clustering</li>
<li>Click unwanted areas (backgrounds, etc.) on the preview image to exclude them</li>
<li>Manage excluded colors in a list, with the option to restore any of them</li>
<li>Copy analyzed HEX codes with a single click</li>
<li>All processing happens entirely in the browser -- safe for unpublished works</li>
<li>Works on both PC and smartphone</li>
</ul>
<h2>How to Use It {#how-to-use}</h2>
<p>The basic workflow is 3 steps:</p>
<ol>
<li><strong>Upload an image</strong></li>
</ol>
<p><img src="/ae5d10aca716197c98a1c45264488333/character-color-analyzer-import.png" alt=""></p>
<p>Drag and drop the illustration you want to analyze onto the left area, or click to select a file. The image stays in your browser -- it's never sent to a server.</p>
<ol start="2">
<li><strong>Review and adjust the analysis results</strong></li>
</ol>
<p><img src="/27bd3a968b25c341cbf578fe6ccc643d/thumb-character-color-analyzer.png" alt=""></p>
<p>A pie chart and color list showing the color composition appear on the right. Use the slider on the left panel to adjust the number of colors to extract.</p>
<ol start="3">
<li><strong>Exclude unwanted colors (optional)</strong></li>
</ol>
<p><img src="/9e4557ce386ecfb78f9188ed851bbeef/character-color-analyzer-%E9%99%A4%E5%A4%96%E3%83%84%E3%83%BC%E3%83%AB.webp" alt=""></p>
<p>To exclude colors unrelated to the character (like backgrounds), click on the relevant area in the preview image on the left. The clicked region's color is added to the exclusion list, and the analysis results update in real time.</p>
<h2>How to Use the Analysis Results {#usage-tips}</h2>
<p>The analysis results can yield all kinds of insights:</p>
<ul>
<li><strong>View your work objectively:</strong>
Analyze a completed illustration and you might discover things like "a particular color dominates more than I expected" or "the accent color was too weak." It's a great way to identify your color habits and areas for improvement.</li>
<li><strong>Expand your color toolkit:</strong>
Analyze your past works and ask yourself, "Why does this color scheme feel good?" By understanding main-to-sub color ratios and accent color usage in numbers, you can turn intuitive "sense" into concrete "technique."</li>
<li><strong>Build a color reference library:</strong>
Analyze various illustrations and collect the results to create your own personal color reference library. You'll learn practical color rules like "energetic characters tend to have high warm-color ratios" or "cool characters use neutral tones effectively."</li>
</ul>
<p><em>When analyzing others' works as color references, please be mindful of copyright and limit use to personal study.</em></p>
<p>:::ad</p>
<h2>How the Analysis Works {#advanced-features}</h2>
<p>Behind its simple appearance, the tool uses carefully chosen techniques for accurate analysis.</p>
<p><strong>CIELAB Color Space -- Closer to Human Perception:</strong>
Instead of standard RGB values, color calculations use the "CIELAB" color space, which closely models human color perception. This groups pixels that "look similar to the human eye" rather than pixels that are merely "numerically close" in RGB.</p>
<p><strong>k-Means Clustering -- Smart Average Color Extraction:</strong>
Illustrations contain countless colors and gradients. To extract "representative colors" from this complexity, the tool uses k-means clustering. It automatically classifies the image's pixels into the specified number of color groups and extracts the average color from each group.</p>
<h2>Safe and Browser-Based {#tech-info}</h2>
<p>This tool is designed with privacy as the top priority. From image selection through analysis to displaying results, all processing happens entirely within the browser.</p>
<p>Image data is never sent to or stored on any external server. The tool even works offline, so you can analyze unpublished works and personal illustrations without any risk of data leaks.</p>
<p>:::post-link{url="/en/tools/character-color-analyzer/" text="Character Color Analyzer"}</p>
<p>:::ad</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/7596a643dd0f9ad3bd1b097c5d4c5010/howto-character-color-analyzer.webp" medium="image"/></item><item><title><![CDATA[I Needed Color Palette Ideas, So I Built the 'Character Color Navigator']]></title><description><![CDATA[Coming up with character color schemes is trickier than it seems. I kept falling into the same patterns, so I built the 'Character Color Navigator' -- a tool that suggests color palettes based on color theory from just a single base color.]]></description><link>https://uhiyama-lab.com/en/blog/tooldev/character-color-navigator/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/tooldev/character-color-navigator/</guid><pubDate>Sat, 31 May 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Coming up with color schemes for characters is surprisingly difficult. Have you ever found yourself thinking, "What colors should I combine?" or realized, "I keep ending up with the same palette every time"?</p>
<p>Even if you know the basic color rules, generating fresh ideas from them isn't easy. "What clothing color goes with this hair color?" "What should the accent color be?" -- once you start thinking about it, the time adds up.</p>
<p>That's why I built the "Character Color Navigator," a tool that suggests color palettes based on color theory from just a single base color.</p>
<p><img src="/a94af2609a52db763e2778093fe4ca3a/howto-character-color-palette.webp" alt=""></p>
<p>:::post-link{url="/en/tools/character-color-navigator/" text="Character Color Navigator"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#introduction">Struggling to Find Color Palette Ideas</a></li>
<li><a href="#what-is-ccn">What Is This Tool?</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#how-to-use">How to Use It</a></li>
<li><a href="#palette-types">Types of Color Palettes</a></li>
<li><a href="#advanced-tips">Making the Most of Palette Images</a></li>
<li><a href="#tech-info">Runs Entirely in the Browser</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Struggling to Find Color Palette Ideas {#introduction}</h2>
<p>Whenever I'm working on character design, choosing color schemes is always a challenge. With essentially infinite color combinations, it's hard not to get stuck wondering, "Which one is best?"</p>
<p>I know that warm, bright colors give an "energetic" impression and cool, dark colors convey "calmness" or "mystery," but when it comes to actually putting combinations together, I always end up with similar patterns.</p>
<p>I wanted fresh color palette ideas, so I built this tool.</p>
<h2>What Is This Tool? {#what-is-ccn}</h2>
<p><img src="/93d2bc811fd88e41f9e65ce607a9922b/character-color-navigator.png" alt=""></p>
<p>It's a tool that automatically suggests color palettes based on color theory, just by selecting a single base color.</p>
<p>"I've decided on the hair color, but what about the outfit?" "I want to know bolder color combinations!" "I want to try something different from my usual palette" -- this tool is for moments like these.</p>
<p>The suggested palettes also consider color roles (base, assort, accent), so they can be applied directly in your design work.</p>
<p>:::post-link{url="/en/tools/character-color-navigator/" text="Character Color Navigator"}</p>
<p>:::ad</p>
<h2>Features {#features}</h2>
<ul>
<li>Generate multiple color palettes instantly by selecting just one color</li>
<li>Covers a wide range of color theories: Monochromatic, Analogous, Complementary, Triadic, and more</li>
<li>Filter palettes by theme: "Calm &#x26; Harmonious," "Bold &#x26; Impactful," and others</li>
<li>Displays guideline ratios for each color's role as Base, Assort, or Accent color</li>
<li>One-click HEX code copy for any displayed color</li>
<li>Save all palettes in the current theme (up to 8) as a single PNG image</li>
<li>Brief descriptions of each palette's characteristics and the impression it creates</li>
<li>Suggests not only classic color theory combinations but also unexpected pairings</li>
<li>Runs entirely in the browser -- no software installation required</li>
<li>Simple interface that anyone can use</li>
</ul>
<h2>How to Use It {#how-to-use}</h2>
<p>The basic workflow is 3 steps:</p>
<ol>
<li><strong>Choose a base color</strong></li>
</ol>
<p><img src="/12c34e8f9bf73bf30fdf85bf4de4bee6/02-character-color-navigator.webp" alt=""></p>
<p>In the control panel on the left, select the "base color" that represents your character's image. Use the color picker to choose intuitively, or enter a HEX code directly.</p>
<ol start="2">
<li><strong>Select a color theme</strong></li>
</ol>
<p><img src="/fcd322d53e9e6d160ccee35586a19d8d/03-character-color-navigator.webp" alt=""></p>
<p>From the "Choose a Color Theme" section, select a theme that matches the mood you're going for. Options include "Calm &#x26; Harmonious," "Bold &#x26; Impactful," "Balanced &#x26; Colorful," and more. When you select a theme, matching color palettes automatically appear on the right.</p>
<ol start="3">
<li><strong>Review and save palettes</strong></li>
</ol>
<p>Check out the generated color palettes. Hover over any color to see its HEX code, and click to copy it. When you find a scheme you like, click "Save Palette Image" to download all palettes in the current theme (up to 8) as a single image.</p>
<h2>Types of Color Palettes {#palette-types}</h2>
<p>The tool suggests palettes based on various color theories:</p>
<ul>
<li><strong>Monochromatic:</strong></li>
</ul>
<p>A palette that uses a single hue and varies only its lightness and saturation. Creates a cohesive, calm impression.</p>
<ul>
<li><strong>Analogous:</strong></li>
</ul>
<p>Uses colors that sit next to each other on the color wheel. Produces a natural, easy-on-the-eyes look with a gentle, approachable feel.</p>
<ul>
<li><strong>Complementary:</strong></li>
</ul>
<p>Combines colors directly opposite each other on the color wheel. Creates strong color contrast for a bold, dynamic impression.</p>
<ul>
<li><strong>Split Complementary:</strong></li>
</ul>
<p>Pairs the base color with the two colors adjacent to its complement. Offers the vibrancy of complementary colors but with more harmony and stability.</p>
<ul>
<li><strong>Triadic:</strong></li>
</ul>
<p>Uses three colors evenly spaced around the color wheel. Provides diverse color use while maintaining balanced stability and a lively, vibrant impression.</p>
<ul>
<li><strong>Tetradic:</strong></li>
</ul>
<p>Uses four colors forming a rectangle or square on the color wheel. Enables rich, diverse expression, though careful attention to color proportion and area balance is important.</p>
<p>Beyond these, many other variations are available. Experiment and explore.</p>
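<p>All of the harmony rules above boil down to rotating the base color's hue around the color wheel by fixed offsets. Here is a rough sketch of that underlying math in plain JavaScript (illustrative only, not the tool's actual code):</p>

```javascript
// Rotate a hue (in degrees) by an offset, wrapping around the 0-359 wheel.
function rotateHue(hue, offset) {
  return ((hue + offset) % 360 + 360) % 360;
}

// Derive the harmony hues described above from a single base hue.
function harmonyHues(baseHue) {
  return {
    monochromatic:      [baseHue],                                              // vary lightness/saturation only
    analogous:          [rotateHue(baseHue, -30), baseHue, rotateHue(baseHue, 30)],
    complementary:      [baseHue, rotateHue(baseHue, 180)],
    splitComplementary: [baseHue, rotateHue(baseHue, 150), rotateHue(baseHue, 210)],
    triadic:            [baseHue, rotateHue(baseHue, 120), rotateHue(baseHue, 240)],
    tetradic:           [baseHue, rotateHue(baseHue, 90), rotateHue(baseHue, 180), rotateHue(baseHue, 270)],
  };
}
```

<p>For example, a red base (hue 0) yields cyan (180) as its complement and red/green/blue (0, 120, 240) as its triad; the rest of a palette comes from varying lightness and saturation around these hues.</p>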
<p>:::ad</p>
<h2>Making the Most of Palette Images {#advanced-tips}</h2>
<p>Downloaded palette images aren't just for reference -- you can integrate them directly into your illustration workflow.</p>
<p>Load the saved palette image as a new layer in your illustration software (CLIP STUDIO PAINT, Photoshop, Procreate, SAI, etc.). Then simply use the eyedropper tool to pick colors from that layer. This eliminates the need to manually enter HEX codes one by one, letting you apply colors to your artwork quickly.</p>
<h2>Runs Entirely in the Browser {#tech-info}</h2>
<p>This tool is designed so that all processing happens entirely in the browser. No special applications need to be installed.</p>
<p>Color selection, palette calculations, and palette image generation all happen within the browser (primarily using JavaScript and the HTML5 Canvas API). Your base color input and generated palette data are never sent to any external server.</p>
<p>:::post-link{url="/en/tools/character-color-navigator/" text="Character Color Navigator"}</p>
<p>:::ad</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/a94af2609a52db763e2778093fe4ca3a/howto-character-color-palette.webp" medium="image"/></item><item><title><![CDATA[I Needed Quick Sound Effects, So I Built the 'Simple Sound FX Generator']]></title><description><![CDATA[When you're developing games or editing videos, sometimes you just need a quick sound effect. Searching for free assets was too time-consuming, so I built the 'Simple Sound FX Generator' -- a browser tool that lets you create original sound effects just by dragging on an XY pad.]]></description><link>https://uhiyama-lab.com/en/blog/tooldev/simple-sound-fx-generator/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/tooldev/simple-sound-fx-generator/</guid><category><![CDATA[audio]]></category><pubDate>Fri, 30 May 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>When you're developing games or editing videos, there are moments when you think, "I just need a quick sound effect." Free asset libraries are an option, but finding something that matches your exact vision can take more time than expected.</p>
<p>"Wouldn't it be great if I could just make sound effects myself, quickly and easily?" That thought led me to build a browser-based tool for generating sound effects on the fly.</p>
<p><img src="/4cdc9ed4f0065e17a9d8d4df0594341b/01_simpleSoundFXGenerator.png" alt=""></p>
<p>Just drag on the XY pad with your mouse to intuitively adjust pitch and sound variation. Created sounds can be downloaded as WAV files, and commercial use is allowed.</p>
<p>:::post-link{url="/en/tools/simple-sound-fx-generator/" text="Simple Sound FX Generator"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#overview">Why I Needed Sound Effects</a></li>
<li><a href="#what-is-tool">What Is This Tool?</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#usage">How to Use It</a></li>
<li><a href="#sound-tips">Tips for Sound Adjustment</a></li>
<li><a href="#implementation">Runs Entirely in the Browser</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Why I Needed Sound Effects {#overview}</h2>
<p>When prototyping a game, you inevitably need little sound effects like "jump sounds" or "item pickup sounds." Free asset sites are fine, but finding the right match for what you have in mind can eat up valuable time.</p>
<p>"It would be so convenient if I could just whip them up myself" -- and with that, I built a browser-based sound effect generator.</p>
<h2>What Is This Tool? {#what-is-tool}</h2>
<p><img src="/4a0d1b20bea25a77db98e60bf283d485/02_simpleSoundFXGenerator.webp" alt=""></p>
<p>It's a browser-based sound effect creation tool. Just drag on the XY pad with your mouse to intuitively adjust pitch and sound variation.</p>
<p>Beyond basic waveform, duration, and volume controls, you can also fine-tune the envelope (how volume changes over time) and pitch slide (how pitch changes), enabling a wide variety of sounds.</p>
<p>Presets for common retro 2D game sound effects are included, so you can start creating sounds immediately -- no sound design expertise needed.</p>
<h2>Features {#features}</h2>
<ul>
<li>Real-time pitch and pitch variation adjustment via XY pad (mouse or touch)</li>
<li>Freely customizable waveform selection (sine, square, etc.), duration, volume, and ADSR envelope</li>
<li>Built-in 2D game sound effect presets: Jump, Shoot, Explosion, and more</li>
<li>Save created sounds as WAV files</li>
<li>Free for both personal and commercial use</li>
<li>Runs entirely in the browser -- no software installation required</li>
</ul>
<h2>How to Use It {#usage}</h2>
<p>The basic workflow is 3 steps:</p>
<ol>
<li><strong>Create the basic sound</strong></li>
</ol>
<p><img src="/7a888f92857d529315f19a57eec70ac4/03_simpleSoundFXGenerator.png" alt=""></p>
<p>Use the "Sound Settings" and "Envelope" controls on the left to define the foundation of your sound. Choose a waveform, then set the duration and volume. Next, adjust the Attack (how quickly the sound reaches full volume), Decay (the transition from peak to the sustained level), Sustain (the sustained volume level), and Release (the fade-out after the note is released).</p>
<p>Tip: Keep each envelope stage shorter than the overall sound duration, so the full attack-decay-release shape fits within the sound instead of getting cut off.</p>
<ol start="2">
<li><strong>Adjust the sound with the Sound Pad</strong></li>
</ol>
<p><img src="/569c435c5c4b019b1d795031fed11de4/04_simpleSoundFXGenerator.webp" alt=""></p>
<p>Use the "Sound Pad" on the right to adjust pitch and variation. Drag your mouse or finger across the pad -- the cursor's vertical position (Y-axis) controls pitch, while horizontal position (X-axis) controls pitch change. Experiment to discover interesting sounds.</p>
<ol start="3">
<li><strong>Play &#x26; Download</strong></li>
</ol>
<p><img src="/bd186dee12581828c338082023780dee/05_simpleSoundFXGenerator.png" alt=""></p>
<p>Click "Play" to preview your creation. Once you're satisfied, click "Download" to save it as a WAV file.</p>
<p>Starting from a preset and fine-tuning the parameters is also a great approach.</p>
<p>:::ad</p>
<h2>Tips for Sound Adjustment {#sound-tips}</h2>
<p>Here are some tips for creating the sound effects you have in mind:</p>
<ul>
<li><strong>Using the XY Pad:</strong>
<ul>
<li><strong>Y-axis (vertical):</strong> Controls the base pitch. Higher position = higher pitch, lower position = lower pitch.</li>
<li><strong>X-axis (horizontal):</strong> Controls pitch change while the sound plays. Moving left lowers the pitch from the starting note; moving right raises it. Near the center, pitch change is minimal.</li>
</ul>
</li>
<li><strong>Shaping the sound with Envelope (ADSR):</strong>
<ul>
<li><strong>Attack:</strong> Short = crisp, punchy sound; Long = soft, gradual onset.</li>
<li><strong>Decay &#x26; Sustain:</strong> Short decay with low sustain = quick, decaying sound; Long decay with high sustain = sustained, ringing sound.</li>
<li><strong>Release:</strong> Short = abrupt cutoff; Long = lingering fadeout.</li>
</ul>
</li>
<li><strong>Waveform Types:</strong>
<ul>
<li><strong>Sine:</strong> Smooth, soft tone. Great for whistles and "pew" sounds.</li>
<li><strong>Square:</strong> Chiptune-style sound. Perfect for retro game sound effects.</li>
<li><strong>Sawtooth:</strong> Harsh, overtone-rich buzzing sound. Good as the core of explosion effects.</li>
<li><strong>Triangle:</strong> Slightly harder than sine but softer than square -- a nice middle ground.</li>
</ul>
</li>
</ul>
<p>Mix and match parameters to discover your own original sounds.</p>
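<p>The XY-pad and ADSR parameters above map directly to per-sample math. Here is a rough sketch of how such a generator might compute a sine tone with a linear envelope and a pitch slide (plain JavaScript; the parameter names are illustrative, not the tool's actual code):</p>

```javascript
// Render mono float samples (-1..1) for a sine tone with a linear ADSR
// envelope and a linear pitch slide from startFreq to endFreq.
function renderTone({ startFreq, endFreq, duration, attack, decay, sustain, release, sampleRate = 44100 }) {
  const n = Math.floor(duration * sampleRate);
  const out = new Float32Array(n);
  let phase = 0;
  for (let i = 0; i < n; i++) {
    const t = i / sampleRate;
    // Envelope: attack ramps 0 -> 1, decay ramps 1 -> sustain,
    // release ramps down to 0 at the end of the sound.
    let env;
    if (t < attack) env = t / attack;
    else if (t < attack + decay) env = 1 - (1 - sustain) * ((t - attack) / decay);
    else if (t > duration - release) env = sustain * ((duration - t) / release);
    else env = sustain;
    // Pitch slide: interpolate frequency over the sound's duration
    // (this is what the X-axis of the pad controls).
    const freq = startFreq + (endFreq - startFreq) * (t / duration);
    phase += (2 * Math.PI * freq) / sampleRate; // accumulate phase to avoid clicks
    out[i] = Math.sin(phase) * env;
  }
  return out;
}
```

<p>Swapping <code>Math.sin</code> for a square or sawtooth function changes the timbre, which is all the "waveform" setting amounts to.</p>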
<h2>Runs Entirely in the Browser {#implementation}</h2>
<p>This tool is designed so that all processing happens entirely in the browser. No special software installation is needed.</p>
<p>Sound generation uses the Web Audio API built into your browser. Real-time sound manipulation and WAV file export are all performed within the browser.</p>
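<p>The WAV export itself is plain byte-packing and needs no library. As an illustration of what "WAV file export in the browser" involves, here is a minimal 16-bit PCM WAV encoder sketch (not the tool's actual code):</p>

```javascript
// Pack mono float samples (-1..1) into a 16-bit PCM WAV byte buffer
// (44-byte RIFF/WAVE header followed by little-endian samples).
function encodeWav(samples, sampleRate = 44100) {
  const dataSize = samples.length * 2; // 2 bytes per 16-bit sample
  const buf = new ArrayBuffer(44 + dataSize);
  const view = new DataView(buf);
  const writeStr = (off, s) => { for (let i = 0; i < s.length; i++) view.setUint8(off + i, s.charCodeAt(i)); };
  writeStr(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true); // remaining chunk size
  writeStr(8, 'WAVE');
  writeStr(12, 'fmt ');
  view.setUint32(16, 16, true);           // fmt chunk size
  view.setUint16(20, 1, true);            // audio format: PCM
  view.setUint16(22, 1, true);            // channels: mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);            // block align
  view.setUint16(34, 16, true);           // bits per sample
  writeStr(36, 'data');
  view.setUint32(40, dataSize, true);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])); // clamp to valid range
    view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return new Uint8Array(buf);
}
```

<p>In a browser, the resulting bytes can be wrapped in a <code>Blob</code> and offered as a download link, all without touching a server.</p>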
<p>:::post-link{url="/en/tools/simple-sound-fx-generator/" text="Simple Sound FX Generator"}</p>
<p>:::ad</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/569c435c5c4b019b1d795031fed11de4/04_simpleSoundFXGenerator.webp" medium="image"/></item><item><title><![CDATA[How to Customize a 'Punipuni Avatar' with Your Own 2D Art in VRChat]]></title><description><![CDATA[Bring your own illustrations to life in VRChat! An easy-to-follow guide on customizing the popular 'Punipuni Avatar' with your artwork.]]></description><link>https://uhiyama-lab.com/en/blog/dialy/vrchat-punipuni-install/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/dialy/vrchat-punipuni-install/</guid><category><![CDATA[vrchat]]></category><pubDate>Wed, 02 Apr 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="/1ed3534ac1987f6eda0d2a8c03b36229/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-010-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB%E3%82%A2%E3%83%90%E3%82%BF%E3%83%BC.webp" alt="A customized Punipuni Avatar moving in VRChat"></p>
<p>Starting around February 2025, you may have noticed a growing number of "Punipuni Avatars" in VRChat -- avatars that look like 2D illustrations brought to life.</p>
<p>The trend was kicked off by rio3d's "<a href="https://rio3d.booth.pm/items/6556549">[3D Model] #PunipuniAvatar NagiChan</a>" released on Booth. By using this avatar as a base, you can make your own illustrations move around in VRChat's 3D world.</p>
<p>It's a delightful concept -- like Paper Mario, where a flat character walks through a three-dimensional space.</p>
<p>This guide walks you through how to replace the artwork in "#PunipuniAvatar NagiChan" with your own illustrations to create an original Punipuni Avatar.</p>
<p>What you'll need:</p>
<ul>
<li><a href="https://rio3d.booth.pm/items/6556549">[3D Model] #PunipuniAvatar NagiChan</a> (Purchase on Booth)</li>
<li>Unity + VRChat Creator Companion (VCC) environment set up</li>
<li>Your own illustrations in the specified format (4 images, details below)</li>
</ul>
<p>The process is simple:</p>
<ol>
<li>Import "#PunipuniAvatar NagiChan" into your Unity project</li>
<li>Replace the avatar's illustrations (textures) with your own</li>
<li>Upload to VRChat</li>
</ol>
<p>That's all it takes to get your character moving in VRChat. Let's walk through the detailed steps.</p>
<p>:::post-link{url="https://rio3d.booth.pm/items/6556549" text="[3D Model] #PunipuniAvatar NagiChan (BOOTH)"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#import-avatar">Importing the Punipuni Avatar into Unity</a></li>
<li><a href="#modify-avatar">Easy! Replacing the Punipuni Avatar's Illustrations</a>
<ul>
<li><a href="#prepare-images">Preparing Your Replacement Illustrations</a></li>
<li><a href="#overwrite-files">Overwriting the Image Files in Unity</a></li>
<li><a href="#confirm-upload">Verifying in Unity and Uploading to VRChat</a></li>
</ul>
</li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Step 1: Importing the Punipuni Avatar into Unity {#import-avatar}</h2>
<p>First, drag and drop the <code>.unitypackage</code> file included in the "#PunipuniAvatar NagiChan" folder (purchased from Booth) into the Assets window of your VRChat project.</p>
<p><img src="/c8fd43247555a74f10d668ca59936e74/001-%E3%83%97%E3%83%AD%E3%82%B8%E3%82%A7%E3%82%AF%E3%83%88%E5%B0%8E%E5%85%A5.png" alt="Drag and drop the Punipuni Avatar unitypackage into Unity"></p>
<p>Once the import is complete, find the Prefab named "<strong>PUNI</strong>" inside the <code>Assets/PUNIPUNI_AVATAR</code> folder in the Project window, and drag it into the Hierarchy window (scene).</p>
<p><img src="/cb85ad7c52e50506a92d0851bf9c5feb/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-002-%E3%82%A4%E3%83%B3%E3%83%9D%E3%83%BC%E3%83%88%E5%AE%8C%E4%BA%86.png" alt="After import, place the PUNI Prefab in the Hierarchy"></p>
<p>At this point, if you follow the VRChat Creator Companion (VCC) upload process, you can use the original "NagiChan" Punipuni Avatar as-is.</p>
<hr>
<h2>Step 2: Easy! Replacing the Punipuni Avatar's Illustrations {#modify-avatar}</h2>
<p>Now for the main event -- the actual customization. The Punipuni Avatar's appearance and animations are controlled by four PNG images (textures). Simply replacing these images with your own illustrations creates an original avatar.</p>
<p><img src="/dbf65a17f05d993b8fd0c282d6b94869/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-003-%E8%A1%A8%E7%A4%BA%E7%B4%A0%E6%9D%90.png" alt="The four PNG images that make up the Punipuni Avatar"></p>
<p>These images are located in the <code>Assets/PUNIPUNI_AVATAR/Materials</code> folder of your Unity project:</p>
<ul>
<li><code>tex_nagi_default.png</code>: The illustration for the default state (standing/idle)</li>
<li><code>tex_nagi_default_talk.png</code>: The illustration for lip sync (talking)</li>
<li><code>tex_nagi_walk_1.png</code>: Walking animation frame 1</li>
<li><code>tex_nagi_walk_2.png</code>: Walking animation frame 2</li>
</ul>
<p>In VRChat, the <code>default</code> image is shown while standing, <code>talk</code> when speaking, and <code>walk_1</code> and <code>walk_2</code> alternate when walking to create an animation effect.</p>
<p>Unlike typical 3D avatar customization, no complex positioning adjustments are needed. The simplest approach is to prepare your own illustrations with the exact same filenames as the four original files, then overwrite the originals. This preserves all internal references while swapping the visuals to your own artwork.</p>
<p>:::ad</p>
<h3>2-1. Preparing Your Replacement Illustrations {#prepare-images}</h3>
<p>First, prepare your own illustrations corresponding to the four states described above. Key points:</p>
<ul>
<li><strong>Image size:</strong> The original NagiChan images are 3035x3035 pixels. Ideally, create your illustrations at the same size with the character centered. (Different sizes will work but may cause display misalignment.)</li>
<li><strong>Filename:</strong> Use the exact same filenames as the originals (e.g., <code>tex_nagi_default.png</code>).</li>
<li><strong>File format:</strong> Save as PNG (transparent backgrounds are supported).</li>
</ul>
<p>I created my illustrations based on a modified version of a Manuka-chan avatar I had previously made. (I gave them a slight pixel-art look using the "<a href="/en/tools/image-to-pixelization/">Pixel Art Converter</a>" web tool from this blog.)</p>
<p>:::post-link{url="/en/tools/image-to-pixelization/" text="Pixel Art Converter"}</p>
<p><img src="/1cdc2a2522dc30637fb9aa236f2f58c7/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-004-%E7%BD%AE%E3%81%8D%E6%8F%9B%E3%81%88%E7%B4%A0%E6%9D%90.png" alt="Four replacement illustrations I created"></p>
<p>(From left to right: default, lip sync, walk 1, walk 2)</p>
<p><img src="/04e252ad57f1cd36e48091df825367e8/20241121-0-VRChat%E3%81%AF%E3%81%98%E3%82%81%E3%81%A6%E3%81%AE%E3%83%A2%E3%83%87%E3%83%AB%E5%B0%8E%E5%85%A51.webp" alt="The original Manuka-chan avatar I based my illustrations on"></p>
<p>Rename the four illustrations you prepared to match the corresponding original filenames.</p>
<p><img src="/e66cff87572ee241bc89da681f4d5965/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-005-%E7%B4%A0%E6%9D%90%E3%83%AA%E3%83%8D%E3%83%BC%E3%83%A0.png" alt="Renaming illustrations to match the original filenames"></p>
<h3>2-2. Overwriting the Image Files in Unity {#overwrite-files}</h3>
<p>Next, locate the original image files in the Unity editor and overwrite them with your own illustrations.</p>
<ol>
<li>
<p>In Unity's Project window, open the <code>Assets/PUNIPUNI_AVATAR/Materials</code> folder.</p>
</li>
<li>
<p>Right-click on one of the original NagiChan PNG files (e.g., <code>tex_nagi_default.png</code>) and select "Show in Explorer" (or "Reveal in Finder" on Mac). This opens the actual folder where these files are stored.</p>
</li>
</ol>
<p><img src="/0d81074ccd01036dec4209884d248fa3/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-006-%EF%BE%85%EF%BD%B7%EF%BE%9E%EF%BE%81%EF%BD%AC%EF%BE%9D%E7%94%BB%E5%83%8F.png" alt="Right-click a PNG file in Unity&#x27;s Materials folder and select Show in Explorer"></p>
<ol start="3">
<li>Copy and paste (or drag and drop) all four of your prepared illustration PNG files into the opened folder.</li>
</ol>
<p><img src="/209eb0768a6d747370345d8166a94575/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-007-%E6%94%B9%E5%A4%89%E7%B4%A0%E6%9D%90%E3%82%A4%E3%83%B3%E3%83%9D%E3%83%BC%E3%83%88.png" alt="Copy and paste your custom illustrations into the opened folder"></p>
<ol start="4">
<li>A warning saying "File already exists" will appear. Select "Replace" to overwrite all four files.</li>
</ol>
<p><img src="/127c133661da57a9542ee4372c900396/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-008-%E7%94%BB%E5%83%8F%E3%81%AE%E4%B8%8A%E6%9B%B8%E3%81%8D.png" alt="File overwrite confirmation dialog"></p>
<h3>2-3. Verifying in Unity and Uploading to VRChat {#confirm-upload}</h3>
<p>Once the files are overwritten, switch back to the Unity editor. Unity will automatically detect the file changes and run an import process. After a moment, the avatar's appearance in the Project window and Scene view should update to show your illustrations.</p>
<p><img src="/94fb1e62fe51c58506b207988f91fbc9/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-009-Unity%E3%82%A4%E3%83%B3%E3%83%9D%E3%83%BC%E3%83%88.webp" alt="Confirming the illustrations have been replaced in the Unity editor"></p>
<p>(The avatar now displays my custom illustrations in Unity.)</p>
<p>From here, the process is the same as any standard VRChat avatar upload. Open the VRChat SDK Control Panel and click "Build &#x26; Publish for Windows" to upload your avatar.</p>
<p>And that's it -- your original Punipuni Avatar featuring your own illustrations is complete!</p>
<p><img src="/1ed3534ac1987f6eda0d2a8c03b36229/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-010-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB%E3%82%A2%E3%83%90%E3%82%BF%E3%83%BC.webp" alt="The finished original Punipuni Avatar moving in VRChat"></p>
<p>Now go ahead and let your character roam the world of VRChat.</p>
<p>:::post-link{url="https://rio3d.booth.pm/items/6556549" text="[3D Model] #PunipuniAvatar NagiChan (BOOTH)"}</p>
<p>:::ad</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/1ed3534ac1987f6eda0d2a8c03b36229/20250402-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB-010-%E3%81%B7%E3%81%AB%E3%81%B7%E3%81%AB%E3%82%A2%E3%83%90%E3%82%BF%E3%83%BC.webp" medium="image"/></item><item><title><![CDATA[How to Connect Claude API with Cursor: Setup Guide After Premium Model Limits]]></title><description><![CDATA[A step-by-step guide on how to set up Claude API integration with Cursor after reaching the 500 premium model request limit]]></description><link>https://uhiyama-lab.com/en/blog/dialy/cursor-claude-api-setup/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/dialy/cursor-claude-api-setup/</guid><category><![CDATA[cursor]]></category><pubDate>Thu, 20 Mar 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>The AI code editor "<a href="https://www.cursor.com/">Cursor</a>" has been making waves among developers lately. Drawn by all the buzz, I finally started using it myself.</p>
<p><img src="/b69dec85557d8b0ea98f5ba92dc3fa31/20250320-Cursor-Claude-API_00.png" alt="Screenshot of the Cursor editor"></p>
<p>I had already experienced a 10x speed improvement using tools like ChatGPT Pro and Claude 3, but after adopting Cursor, I felt <strong>another 10x boost -- roughly 100x faster than my original workflow</strong>. I'm completely sold on Cursor.</p>
<p>What makes Cursor so powerful is the ability to interact with AI directly inside the editor. While coding, you can ask things like "refactor this group of components" or paste a console error and ask "what's causing this and how do I fix it?" AI-generated code changes are displayed as color-coded diffs (like Git), making it easy to reject unintended modifications. Package installations and file deletions only execute after you click the "Accept" button, so you stay in control.</p>
<p>While web-based AI tools like ChatGPT and Claude can do similar things, Cursor provides suggestions with a deeper understanding of your entire project's file structure and dependencies, resulting in more accurate fixes and a much smoother development experience.</p>
<p>Additionally, web-based AI tools can become unreliable or cut off during long conversations or large code generation tasks. Cursor, on the other hand, handles massive modifications all at once.</p>
<p>Occasionally (especially with high-performance models), overly aggressive modifications may occur, but the "Restore" feature lets you easily roll back file states to the point before your last instruction. (However, recovering deleted folders can be difficult, so reviewing code before clicking Accept is important.)</p>
<p>Now, let's get to the main topic.</p>
<p><img src="/a800d81d00acb4119996a5428fa3b2aa/20250320-Cursor-Claude-API_01.png" alt="Cursor&#x27;s premium model usage limit notification"></p>
<p>Cursor's Pro plan has a <strong>monthly limit of 500 requests</strong> for using high-performance <strong>premium models</strong> in Fast mode.</p>
<p>Think of it as 500 tickets per month for high-speed AI processing. This is plenty for typical usage, but if you get deeply engrossed in tool development or large-scale refactoring like I did, you can hit the limit within just a few days.</p>
<p>After you exceed the limit you can still use Slow mode, but once you've experienced the speed of Fast mode, going back to Slow mode feels painful.</p>
<p>Cursor doesn't offer a direct plan to "increase premium model usage." To maintain Fast mode beyond the limit, you need to <strong>set up your own API key from OpenAI or Anthropic and pay per usage</strong>.</p>
<p>This article provides a clear, step-by-step guide for anyone looking to connect Claude API with Cursor, covering <strong>how to obtain a Claude API key from Anthropic Console, add credits, and configure it in Cursor</strong>.</p>
<p>:::ad</p>
<hr>
<p><strong>In This Article</strong></p>
<ol>
<li><a href="#cursor-premium">What Are Cursor's Premium Models? Understanding the Limits and Fast Mode</a></li>
<li><a href="#api-setup">Setting Up an Anthropic API Key (From Account Creation to Purchasing Credits)</a></li>
<li><a href="#cursor-connect">How to Connect Cursor with Claude API</a></li>
<li><a href="#api-combination">How Pay-Per-Use API Integration Works and What to Watch Out For</a></li>
<li><a href="#cost-performance">Real-World Costs After Claude API Integration and Alternatives</a></li>
<li><a href="#summary">Summary: Maintaining Development Efficiency with Cursor and Claude API</a></li>
<li><a href="#slowmode">Update: Slow Mode Might Actually Be Usable?</a></li>
<li><a href="#gemini-option">Update 2: The Reality of Slow Mode and Gemini 2.5 Pro as a Powerful Alternative (April 2025)</a></li>
</ol>
<hr>
<h2>What Are Cursor's Premium Models? Understanding the Limits and Fast Mode</h2>
<p>Cursor comes with multiple AI models built in, and the most capable ones are categorized as "<a href="https://docs.cursor.com/settings/models">Premium Models</a>." Notable examples include:</p>
<ul>
<li><strong>Claude 3.5 Sonnet</strong> (by Anthropic)</li>
<li><strong>GPT-4 / GPT-4o</strong> (by OpenAI)</li>
<li>Other cutting-edge high-performance models</li>
</ul>
<p>These premium models excel at complex code generation, advanced debugging assistance, and accurate Q&#x26;A, delivering significantly better results than standard models.</p>
<p>Cursor's paid "Pro plan" ($20/month) allows you to use these premium models in <strong>"Fast" mode up to 500 times per month</strong>. In Fast mode, your AI requests are processed with priority, letting you work without any waiting.</p>
<p>Once you exceed the 500 monthly requests, you're automatically switched to <strong>"Slow" mode</strong>. In Slow mode, AI responses take longer, and during peak server times, you might experience wait times ranging from seconds to minutes.</p>
<p>However, by enabling the <strong>"Enable usage-based pricing"</strong> option in Cursor's settings, you can maintain Fast mode beyond the 500-request limit. In this case, additional charges are incurred based on the token volume (amount of text processed) through the connected API service (Anthropic, in this case).</p>
<p><img src="/6f5f9bf38446587643cf6bdf49f79f73/20250320-Cursor-Claude-API_02.png" alt="Cursor&#x27;s usage-based pricing settings screen"></p>
<p>When you enable "Enable usage-based pricing," the settings shown above will appear.</p>
<p>The key setting here is the <strong>"Monthly Spending Limit."</strong> This is a safety feature that automatically stops API usage through Cursor once your pay-per-use charges reach the set limit. Note that this is separate from the Cursor Pro plan fee ($20) -- you pay the API charges directly to the API provider (Anthropic or OpenAI).</p>
<p><img src="/ea9ec90ec7050803c9d25be5e1aa1b6d/20250320-Cursor-Claude-API_03.png" alt="Cursor AI model usage history"></p>
<p>At the bottom of Cursor's settings screen, you can view detailed usage history showing which models you've used and how many times.</p>
<p>Items labeled "Included in Pro" represent the 500 monthly premium model requests included in the Pro plan. Items labeled "User API Key" represent usage through your <strong>own API key (such as Claude API)</strong>, which incurs pay-per-use charges.</p>
<p>Looking at the history, you'll notice entries like "Aborted, Not Charged" and "Errored, Not Charged." This means you won't be billed for requests that were cancelled or failed -- a welcome detail for users.</p>
<p>:::ad</p>
<h2>Setting Up an Anthropic API Key (From Account Creation to Purchasing Credits)</h2>
<p>Let's walk through the steps to obtain an <strong>Anthropic API key</strong> so you can use Anthropic models like Claude 3.5 Sonnet with pay-per-use pricing in Cursor.</p>
<h3>Step 1: Create an Account on Anthropic Console</h3>
<ul>
<li>Visit Anthropic's official website at <a href="https://console.anthropic.com">console.anthropic.com</a>.</li>
<li>Click "Sign Up," set your email address and password to create an account, then verify your email through the confirmation message sent to your inbox.</li>
<li>After logging in, fill in any required basic information such as your name or organization name.</li>
</ul>
<h3>Step 2: Generate an API Key</h3>
<ul>
<li>After logging in, select "API Keys" from the left menu.</li>
<li>Click the "Create Key" button.</li>
<li>Enter a name to identify your API key (e.g., "Cursor-Integration" -- something descriptive is recommended).</li>
<li>Your API key will be generated and displayed on screen. This key is <strong>shown only once</strong>, so be sure to copy it and store it in a secure location like a password manager.</li>
</ul>
<p><img src="/039505ae0c9587a8b2dafdd492c6fa29/20250320-Cursor-Claude-API_04.png" alt="API key creation screen on Anthropic Console"></p>
<h3>Step 3: Set Up Payment Information and Purchase Credits</h3>
<ul>
<li>Navigate to "Plans &#x26; Billing" from the left menu.</li>
<li>To use the API, you need to purchase credits in advance. Click "Add Payment Method" and register your credit card information.</li>
<li>Next, purchase credits in the "Purchase credits" section. For testing purposes, the minimum amount of <strong>$5</strong> should be sufficient.</li>
<li>There's also an "Auto-reload" feature that automatically purchases additional credits when your balance drops below a certain amount. However, to avoid unintended charges, <strong>it's recommended to keep this turned off initially</strong>.</li>
</ul>
<p><img src="/474b5f35a1c79d143093f786694ca52c/20250320-Cursor-Claude-API_05.png" alt="Credit purchase screen on Anthropic Console"></p>
<p><img src="/d5b00c876975b0608ffe1cb59c51da39/20250320-Cursor-Claude-API_06.png" alt="Anthropic credit purchase button"></p>
<p>Once the credit purchase is complete, your balance (e.g., $5.00) will be displayed on the dashboard. Each time you use the Claude API through Cursor, charges will be deducted from this balance.</p>
<p><img src="/bb5a02d6095a6f572ee108b396a67af9/20250320-Cursor-Claude-API_07.png" alt="Anthropic auto-reload settings screen"></p>
<p>The "Auto Reload" settings allow configurations like "automatically add $10 when the balance drops below $5." This is convenient if you use the API frequently and find manual top-ups tedious, but it's best to assess your usage patterns before enabling it.</p>
<p>:::ad</p>
<h2>How to Connect Cursor with Claude API</h2>
<p>Once your Anthropic Console setup is complete, it's time to connect Cursor with the Claude API. The process is straightforward.</p>
<h3>Step 1: Register the Anthropic API Key in Cursor</h3>
<ul>
<li>Open the Cursor editor and navigate to "Settings" > "Configure models" from the menu bar (or settings screen).</li>
<li>Find the "Anthropic API Key" field in the settings.</li>
<li><strong>Paste the API key</strong> you previously obtained from Anthropic Console and saved securely.</li>
<li>Click "Save" or "Apply" to save the settings (the UI may vary slightly depending on the version).</li>
</ul>
<p><img src="/1e1e93bc17de2c6da7a31bc7994ee76a/20250320-Cursor-Claude-API_08.png" alt="Anthropic API key input field in Cursor settings"></p>
<h3>Step 2: Verify the Connection and Start Using It</h3>
<ul>
<li>Once configured, try using Cursor's AI features (code generation, chat, etc.). Specifically, try selecting a premium model (like Claude 3.5 Sonnet).</li>
<li>If you've already exceeded the Pro plan's monthly 500-request limit, the API integration should restore "Fast" mode responses.</li>
<li>To confirm, check the "Usage" or "Billing" page on Anthropic Console to verify that token usage and corresponding costs are being recorded.</li>
</ul>
<p>That's it -- your Cursor and Claude API integration is complete. If your Cursor usage history shows entries under "User API Key," the connection is working correctly.</p>
<h2>How Pay-Per-Use API Integration Works and What to Watch Out For</h2>
<p>Here's a summary of the key concepts and precautions to know when using an external API like Claude API with pay-per-use pricing for the first time.</p>
<ul>
<li><strong>How pay-per-use works</strong>: Each time you use the Claude API through Cursor, charges are calculated based on the amount of text processed (token count) and automatically deducted from the credit balance you purchased on Anthropic Console.</li>
<li><strong>Watch your balance</strong>: When your credit balance runs out, you may lose the ability to use the API through Cursor (Fast mode becomes unavailable, or errors may occur). Check your balance regularly on Anthropic Console.</li>
<li><strong>Auto-reload</strong>: As mentioned, this feature is OFF by default. When enabled, it automatically charges your credit card when your balance drops below a specified amount. While convenient, it's recommended to keep it OFF (or set the amount carefully) until you understand your usage patterns to avoid unexpected charges.</li>
<li><strong>Stay cost-conscious</strong>: While API integration is technically simple, always keep in mind that you're "paying for what you use." Frequent large-scale code generation or multi-file refactoring can consume credits faster than expected. Regularly monitor your usage on Anthropic Console to stay on top of costs.</li>
</ul>
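<p>As a back-of-the-envelope sketch, the token-based billing described above can be expressed in a few lines of Python. The per-million-token prices here are illustrative placeholders, not Anthropic's actual rates (always check the official pricing page):</p>

```python
# Rough cost model for pay-per-use API billing.
# NOTE: the prices below are illustrative placeholders, not current Anthropic rates.
INPUT_PRICE_PER_M = 3.00    # USD per 1M input tokens (assumed)
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output tokens (assumed)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single API request, deducted from your credit balance."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M)

def requests_per_budget(budget: float, input_tokens: int, output_tokens: int) -> int:
    """How many same-sized requests a given credit balance covers."""
    return int(budget // request_cost(input_tokens, output_tokens))

# Example: a request sending ~20k tokens of context and receiving ~2.5k back
cost = request_cost(20_000, 2_500)  # 0.0975 USD under the assumed prices
print(f"${cost:.4f} per request, "
      f"{requests_per_budget(5.00, 20_000, 2_500)} requests per $5")
```

<p>Large contexts (many open files, long chat history) drive up the input-token term quickly, which is why multi-file refactoring burns credit much faster than short questions.</p>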
<p>Understanding these points will help you confidently leverage the Cursor and Claude API integration.</p>
<p>:::ad</p>
<h2>Real-World Costs After Claude API Integration and Alternatives</h2>
<h3>How Long Does $5 of Credit Last?</h3>
<p>After connecting Cursor with the Claude API, the natural question is: "How far does $5 of credit go?"</p>
<p>Based on my usage (frequent multi-file refactoring and moderately complex code generation), $5 of credit was consumed faster than expected. Roughly speaking, <strong>intensive usage could exhaust $5 in about 50 requests</strong>.</p>
<p>That's a heavy-use scenario. If you continued using the API at the same pace after exhausting the Pro plan's 500 monthly requests, API charges alone could reach $50 for another 500 requests. Of course, simple questions or short code fixes would cost much less (perhaps just a few dollars). (And if your usage is that light, you probably wouldn't exceed the 500-request limit in the first place.)</p>
<h3>Alternatives to Reduce Costs</h3>
<p>If Claude API integration costs more than expected, consider these alternatives:</p>
<ul>
<li><strong>Multiple Cursor account strategy</strong>: A bit of a workaround, but subscribing to Cursor Pro with a second account gives you a total of 1,000 premium model Fast requests per month. Depending on your API costs, this might actually be cheaper ($20 x 2 = $40/month).</li>
<li><strong>Other AI coding tools and APIs</strong>: Tools like "<a href="https://www.cline.ai/">Cline</a>" (an AI coding extension for VS Code) can use models such as DeepSeek, which are reportedly much cheaper than the Claude API. For routine code generation or refactoring, these models may perform well enough while being more cost-effective.</li>
<li><strong>Hybrid strategy (mixing models)</strong>: If you have a separate ChatGPT Plus or Claude Pro subscription, use those for complex design discussions and high-level planning, then use Cursor (or other editors) with low-cost APIs for the actual coding work. For example, you could finalize the approach in Claude, then pass those instructions to an editor with DeepSeek API integration for implementation -- balancing quality and cost.</li>
</ul>
<p>Consider these alternatives and find the approach that best fits your development style and budget.</p>
<h2>Summary: Maintaining Development Efficiency with Cursor and Claude API</h2>
<p>This article explained how to maintain fast response performance (Fast mode) and keep your development efficiency high by <strong>integrating the Claude API</strong> after reaching Cursor Pro plan's monthly 500 premium model request limit.</p>
<p>We covered the specific steps for creating an Anthropic Console account, obtaining an API key, purchasing credits, and configuring Cursor. We also discussed how pay-per-use API integration works, important considerations, real-world costs, and alternatives for reducing expenses.</p>
<p>For developers looking to further enhance their programming efficiency -- especially those who actively leverage AI assistants in their coding workflow -- Cursor's API integration is worth trying. Combine the methods and alternatives presented here according to your usage frequency and budget to build your optimal development environment.</p>
<p>:::ad</p>
<h2>Update: Slow Mode Might Actually Be Usable?</h2>
<p>Since publishing this article, I've continued using Cursor and made a new discovery. I tried turning OFF the API integration (pay-per-use) and using Cursor beyond the premium model limit in <strong>"Slow Mode"</strong> for a while.</p>
<p>The result? I found that <strong>"Slow Mode is actually more practical than expected."</strong></p>
<p>With many AI tools, "Slow Mode" typically means painfully slow responses that are barely usable. However, Cursor's Slow Mode, at least in my experience, <strong>didn't feel dramatically different from Fast Mode</strong>.</p>
<p>Of course, the quality and accuracy of generated code in Slow Mode are essentially the same as Fast Mode. So before switching to API pay-per-use, it's well worth <strong>trying Slow Mode first to see if the speed is genuinely bothersome</strong>.</p>
<p>That said, response times in Slow Mode may vary depending on the time of day and Cursor's server load.</p>
<p>This leads to a potential strategy: <strong>"Use Slow Mode by default and only temporarily enable pay-per-use when responses feel too slow."</strong> This could significantly reduce your API costs.</p>
<p>In conclusion, exceeding the Cursor Pro plan's 500 monthly requests doesn't necessarily mean you need to immediately switch to API pay-per-use. "Trying Slow Mode first" is a perfectly valid option.</p>
<hr>
<h2>Update 2: The Reality of Slow Mode and Gemini 2.5 Pro as a Powerful Alternative (April 2025)</h2>
<p>In my previous update, I mentioned that "Slow Mode is more usable than expected," but the situation has changed with extended use. One day, I encountered cases where <strong>Slow Mode responses took over 30 seconds</strong>. As expected, Slow Mode response times can fluctuate significantly depending on the time of day and server congestion.</p>
<p>In that situation, I temporarily enabled Claude API pay-per-use as suggested in my previous update, which restored fast responses. The pay-per-use option works well as a fallback when Slow Mode feels too sluggish.</p>
<p>However, after exhausting Cursor Pro's Fast mode (500 monthly requests), your options aren't limited to just enduring Slow Mode or Claude API pay-per-use. <strong>A powerful new alternative has recently emerged!</strong></p>
<p>It's Google's latest large language model, <strong>"Gemini 2.5 Pro Experimental,"</strong> announced on March 25, 2025. This high-performance model, which attracted significant attention immediately after its announcement, is now <strong>available directly within the Cursor editor!</strong></p>
<p>After testing it across several projects, the performance is impressively high -- <strong>comparable to or even surpassing existing premium models like Claude 3.5 Sonnet and GPT-4o</strong> in certain scenarios. The most notable feature of Gemini 2.5 Pro is its <strong>massive context window</strong> (the amount of information it can process and remember at once), which makes large-scale refactoring across multiple files remarkably smooth and accurate.</p>
<p>And the most noteworthy point as of now (April 3, 2025) is that <strong>"Gemini 2.5 Pro Experimental" is available for free on Cursor!</strong></p>
<p>If you've exhausted your 500 monthly Fast mode requests and are frustrated with response speeds, <strong>I strongly recommend selecting "Gemini 2.5 Pro Experimental" in Cursor's model settings and trying it before considering Claude API pay-per-use or other paid options.</strong></p>
<p>Despite being free, it delivers outstanding performance, and in many cases, this model alone may be more than sufficient. Give this new option a try.</p>
<p>:::ad</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/a800d81d00acb4119996a5428fa3b2aa/20250320-Cursor-Claude-API_01.png" medium="image"/></item><item><title><![CDATA[I Needed Dummy Images for UI Design, So I Built 'Sample Image Generator']]></title><description><![CDATA[When designing UIs for web or game development, you often need placeholder dummy images. I built 'Sample Image Generator' -- specify size, background color, and text to instantly create sample images.]]></description><link>https://uhiyama-lab.com/en/blog/tooldev/sample-image-generator/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/tooldev/sample-image-generator/</guid><pubDate>Wed, 12 Mar 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>When designing UIs for web or game development, there are times when you just need a placeholder image. During the wireframing and prototyping stage, you often need dummy images of specific sizes.</p>
<p>But opening an image editor to create one is a hassle, and searching for free stock images takes time. I thought, "If only I could just specify a size and color and get one instantly" -- so I built a browser tool that generates sample images.</p>
<p><img src="/0783a449011b4a00f9fcf65c5c970680/20250312_sampleImageGenerator-thumb.png" alt=""></p>
<p>Just specify the size, background color, and text to instantly download a PNG image.</p>
<p>:::post-link{url="/en/tools/sample-image-generator/" text="▶︎ Sample Image Generator"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#overview">Why I Needed Dummy Images</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#usage">How to Use</a></li>
<li><a href="#format">Supported Formats</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Why I Needed Dummy Images {#overview}</h2>
<p>When working on UI design, there are plenty of situations where you need placeholder images. During the wireframing and prototyping stage, the actual images often don't exist yet.</p>
<p>But opening an image editor just for that is annoying. All you want is something simple, like "an 800x600 gray image."</p>
<p>That's why I built this tool -- just specify the size, background color, and text in your browser and generate a PNG image instantly.</p>
<p>Here are some examples of images you can create:</p>
<p><img src="/b0d958d7572ce15cc26dff0ff3071795/sample-image-800x600-1.png" alt=""></p>
<p><img src="/d6649b91e0787b52f6cadcbfcbfc2f42/sample-image-768x1500-1.png" alt=""></p>
<p><img src="/d1b0683cdd7ec48269ec72d7cbbdfc47/sample-image-2000x768-1.png" alt=""></p>
<h2>Features {#features}</h2>
<ul>
<li>Specify size, background color, text color, and text content in detail</li>
<li>Text visibility toggle</li>
<li>Automatic text sizing with center alignment</li>
<li>Instant preview and download of generated images</li>
<li>Wide selection of preset sizes and preset colors</li>
</ul>
<h2>How to Use {#usage}</h2>
<p><img src="/6b2922ba10087408e98753059d93ad41/sample-image-generator-steps.png" alt=""></p>
<ol>
<li>
<p><strong>Set the size</strong>
Choose from a preset or enter the width and height manually.</p>
</li>
<li>
<p><strong>Set the background color</strong>
Choose from a preset or pick any color from the color palette.</p>
</li>
<li>
<p><strong>Set the text</strong>
Toggle text on/off and specify the text content. Change the text color as needed.</p>
</li>
<li>
<p><strong>Preview &#x26; Download</strong>
The preview updates automatically as you change settings. Click the download button to get a PNG file.</p>
</li>
</ol>
<h2>Supported Formats {#format}</h2>
<p>Currently supports PNG export. It works in any browser that supports the HTML canvas element, on both desktop and mobile.</p>
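<p>The tool itself draws on an HTML canvas in the browser, but the core "size + color in, PNG out" idea can be illustrated with a minimal, standard-library-only Python sketch (solid color only, no text rendering; this is not the tool's actual code):</p>

```python
import struct
import zlib

def solid_png(width: int, height: int, rgb: tuple) -> bytes:
    """Build a minimal solid-color PNG (8-bit RGB) from scratch."""
    def chunk(tag: bytes, data: bytes) -> bytes:
        # Each PNG chunk is: length, 4-byte tag, data, CRC over tag+data.
        return (struct.pack(">I", len(data)) + tag + data
                + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

    # IHDR: width, height, bit depth 8, color type 2 (truecolor RGB)
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)
    scanline = b"\x00" + bytes(rgb) * width  # filter byte 0 + pixel data
    idat = zlib.compress(scanline * height)
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr) + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

# e.g. an 800x600 gray placeholder
data = solid_png(800, 600, (0x99, 0x99, 0x99))
# open("sample-800x600.png", "wb").write(data)
```

<p>In the browser, the equivalent steps are filling a canvas with <code>fillRect</code>, drawing the label with <code>fillText</code>, and exporting via <code>canvas.toBlob</code>.</p>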
<p>:::post-link{url="/en/tools/sample-image-generator/" text="▶︎ Sample Image Generator"}</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/0783a449011b4a00f9fcf65c5c970680/20250312_sampleImageGenerator-thumb.png" medium="image"/></item><item><title><![CDATA[I Built a 'Catan Board Generator' That Automatically Creates Fair Game Maps]]></title><description><![CDATA[I got hooked on Catan and noticed the board layouts were often unbalanced, so I built a 'Catan Board Generator' that eliminates resource and number bias to instantly create fair game maps.]]></description><link>https://uhiyama-lab.com/en/blog/tooldev/catan-board-generator/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/tooldev/catan-board-generator/</guid><pubDate>Sat, 08 Mar 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Have you heard of "Catan," the globally popular board game?</p>
<p>I don't usually play board games, but after playing with some old friends over the holidays, I was completely hooked before I knew it.</p>
<p><img src="/0b248b47e634e1e019687f9fc6dcc7ec/catan-board-8.png" alt=""></p>
<p>Catan has a simple premise: players place settlements on a board made of 19 resource tiles, roll two dice each turn, and any player with a settlement adjacent to a tile matching the dice total collects resources.</p>
<p>By spending resources to build roads and placing new settlements along those roads, you can collect even more resources.</p>
<p>What becomes critical here is the tile placement and number distribution. If the same type of resource tiles are adjacent, or if high-probability numbers (6 and 8) end up next to each other, certain players can dominate resource collection and the game balance falls apart.</p>
<p>To create a fairer game, tiles need to be carefully placed so that probabilities don't skew too heavily in any direction.</p>
<p>That's why I built the "Catan Board Generator" -- a tool that mathematically generates "fair tile layouts" in an instant.</p>
<p><strong>[June 2025 Update]</strong> v2.1.0 introduces <strong>resource balance visualization</strong> and <strong>improved generation quality</strong>! Building on v2.0.0's Expansion (5-6 player) support and UI improvements, even higher quality board generation is now possible.</p>
<p>:::post-link{url="/en/tools/catan-board-generator" text="▶︎ Catan Board Generator"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#overview">What Is the Catan Board Generator?</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#usage">How to Use</a></li>
<li><a href="#rules">Placement Rules in Detail</a></li>
<li><a href="#v2-0-0">v2.0.0 Expansion Support (2025/06/07)</a></li>
<li><a href="#v2-1-0">v2.1.0 Improved Fairness + Resource Balance Visualization (2025/06/12)</a></li>
<li><a href="#v2-2-0">v2.2.0 Balance Chaos Option (2025/06/14)</a></li>
<li><a href="#conclusion">Summary</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>What Is the Catan Board Generator? {#overview}</h2>
<p><img src="/d6b7d77209c27bff6db53a497495a21d/20250614-catan-board-generator2.1.0.webp" alt=""></p>
<p>This tool is a web app that automatically generates Catan boards.
It randomly places both resource tiles and number tokens at once, and you can also reshuffle just the numbers.
While following the official rules -- such as "keep 6 and 8 apart" and "avoid too many consecutive same resources" -- it aims to create reasonably balanced layouts.
When you find a board layout you like, you can save it as an image.</p>
<p><strong>v2.0.0 adds support for the Expansion (5-6 players) in addition to the Standard (3-4 players), along with major UI improvements.</strong></p>
<p>:::post-link{url="/en/tools/catan-board-generator" text="▶︎ Catan Board Generator"}</p>
<h2>Features {#features}</h2>
<ul>
<li>Supports both Standard and Expansion -- switch between 3-4 player (19 tiles) and 5-6 player (30 tiles)</li>
<li>Automatic shuffling of resource tiles and number tokens</li>
<li>"Shuffle Numbers Only" to keep the resource layout while changing numbers</li>
<li>Strict avoidance of adjacent 6 and 8</li>
<li>Advanced balancing to minimize probability bias</li>
<li>Two-column layout for greatly improved usability</li>
<li>Responsive design -- optimized for PC, tablet, and mobile</li>
<li>Save as a high-resolution image (2000x2000px)</li>
</ul>
<h2>How to Use {#usage}</h2>
<ol>
<li>Go to <a href="/en/tools/catan-board-generator" target="_blank">Catan Board Generator</a></li>
<li><strong>Select a board type</strong> -- Choose "Standard" or "Expansion (5-6 players)"</li>
<li>Click the <strong>Shuffle</strong> button to randomly place resource tiles + number tokens</li>
<li>To keep the resource layout but change the numbers, click the <strong>Shuffle Numbers Only</strong> button</li>
<li>To save as an image, click the <strong>Save Image</strong> button</li>
</ol>
<p><img src="/52f9866d131de98a7e1637d11b4f3fdc/20250607-catan-board-generator-settings.png" alt=""></p>
<h2>Placement Rules in Detail {#rules}</h2>
<p>This tool builds on the official rules while adding custom balancing algorithms for a fairer and more strategic gameplay experience.</p>
<h3>Basic Placement Rules</h3>
<ul>
<li><strong>No adjacent same resources:</strong> Same resource tiles (forest, pasture, hills, mountains, fields) are never placed next to each other</li>
<li><strong>No adjacent specific numbers:</strong> High-probability "6" and "8" as well as low-probability "2" and "12" are kept apart</li>
<li><strong>No duplicate numbers on same resource:</strong> The same number won't be assigned to the same resource type more than once</li>
</ul>
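<p>As a simplified sketch of how such adjacency rules can be checked (not the tool's actual implementation, and covering only the same-resource and 6/8 rules), here is a validator over axial hex coordinates:</p>

```python
# Axial hex coordinates: every tile at (q, r) has six fixed neighbor offsets.
HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def violations(board: dict) -> list:
    """board maps (q, r) -> (resource, number); number is None for desert.
    Returns rule violations between adjacent tile pairs
    (each pair is reported from both directions)."""
    bad = []
    for (q, r), (res, num) in board.items():
        for dq, dr in HEX_NEIGHBORS:
            other = board.get((q + dq, r + dr))
            if other is None:
                continue
            o_res, o_num = other
            if res == o_res:
                bad.append(("same resource", (q, r), (q + dq, r + dr)))
            if num in (6, 8) and o_num in (6, 8):
                bad.append(("adjacent 6/8", (q, r), (q + dq, r + dr)))
    return bad

# Two adjacent forest tiles, one carrying a 6 next to an 8: both rules fire.
board = {(0, 0): ("forest", 6), (1, 0): ("forest", 8), (0, 1): ("fields", 5)}
print(violations(board))
```

<p>A generator can then simply reshuffle until <code>violations(board)</code> comes back empty, or score candidate boards by violation count.</p>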
<h3>Advanced Balancing</h3>
<ul>
<li><strong>High-probability number distribution:</strong> "6" and "8" are distributed evenly across resource types to prevent concentration on a single resource</li>
<li><strong>Intersection expected value equalization:</strong> The value of "intersections" (where 3 tiles meet) is adjusted so that no area has an extremely low total probability</li>
<li><strong>Per-resource number quality:</strong> Prevents poor numbers (2, 3, 11, 12, etc.) from clustering on a single resource type, guaranteeing a minimum production expected value for each resource</li>
<li><strong>Probability distribution optimization:</strong> Fine-tunes each resource type's production probability to approach the ideal ratio</li>
</ul>
<h3>Display Details</h3>
<ul>
<li><strong>Number probabilities:</strong> The dots below each number token represent that number's probability (out of 36 possible dice combinations)</li>
<li><strong>High-probability highlighting:</strong> The particularly high-probability "6" and "8" are displayed in red</li>
<li><strong>Visual clarity:</strong> Each tile is color-coded by resource type for easy board comprehension at a glance</li>
</ul>
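<p>Those probability dots fall straight out of counting the 36 equally likely two-die combinations:</p>

```python
from collections import Counter

# Count how many of the 36 two-die rolls produce each total.
ways = Counter(a + b for a in range(1, 7) for b in range(1, 7))

# Catan's "dots" under each number token are exactly this count.
# (7 has no token -- rolling it moves the robber instead.)
dots = {total: n for total, n in sorted(ways.items()) if total != 7}
for total, n in dots.items():
    print(f"{total:2d}: {'.' * n}  ({n}/36)")
```

<p>This is why 6 and 8 (five dots each) are highlighted in red, and why clustering them hands one area a disproportionate share of production.</p>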
<h2>v2.0.0 Expansion Support (2025/06/07) {#v2-0-0}</h2>
<p><img src="/0a86d5f390d8162a8d4a686d1df547d6/20250607-catan-board-generator.webp" alt=""></p>
<p>In the June 2025 update, <strong>v2.0.0</strong> was released. Key improvements include:</p>
<h3>Expansion Features</h3>
<ul>
<li><strong>Tile count:</strong> Increased from 19 to 30 tiles</li>
<li><strong>Resource composition:</strong> Additional resources of each type enable more diverse strategies</li>
<li><strong>Number tokens:</strong> Additional numbers provide more placement options</li>
<li><strong>Desert tiles:</strong> Increased from 1 to 2</li>
</ul>
<h3>Other Improvements</h3>
<ul>
<li><strong>Two-column layout:</strong> On desktop, the control panel and board display are separated for much better usability</li>
<li><strong>Responsive design:</strong> Layout adapts to different device sizes</li>
<li><strong>Dynamic parameter adjustment:</strong> Generation algorithm parameters are automatically tuned based on board size</li>
<li><strong>Optimized search space:</strong> Expansion mode uses 2.3x more search steps for stable generation</li>
<li><strong>Performance improvements:</strong> Coordinate calculation caching and algorithm optimization</li>
<li><strong>Error handling:</strong> Proper feedback and retry functionality when generation fails</li>
</ul>
<p>This allows the Expansion board to be generated at <strong>the same quality level as the Standard version</strong>.</p>
<h2>v2.1.0 Improved Fairness + Resource Balance Visualization (2025/06/12) {#v2-1-0}</h2>
<p><img src="/d6b7d77209c27bff6db53a497495a21d/20250614-catan-board-generator2.1.0.webp" alt=""></p>
<p>In the June 12, 2025 update, <strong>v2.1.0</strong> was released. Key improvements include:</p>
<ul>
<li><strong>Resource balance visualization:</strong> Added a statistics section showing "resource production likelihood," scoring and displaying each resource's production expected value</li>
<li><strong>Improved fairness:</strong> Set minimum thresholds for per-resource total production expected values (Standard: 11, Expansion: 16) to prevent any resource from being extremely disadvantaged</li>
<li><strong>Faster processing:</strong> Introduced Web Workers to speed up and stabilize board generation calculations</li>
</ul>
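<p>A hedged sketch of how such a fairness threshold can be checked (the tool's internal scoring may differ): sum each resource's probability dots and compare the total against the minimum from the changelog (11 for Standard, 16 for Expansion):</p>

```python
# Probability "dots" (out of 36 dice combinations) for each number token.
DOTS = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}

def resource_pips(tiles):
    """tiles: list of (resource, number) pairs; desert has number None.
    Returns the total dots (production expectation) per resource type."""
    totals = {}
    for resource, number in tiles:
        if number is not None:
            totals[resource] = totals.get(resource, 0) + DOTS[number]
    return totals

def meets_threshold(tiles, min_pips):
    """True when every resource clears the minimum production expectation."""
    return all(p >= min_pips for p in resource_pips(tiles).values())

tiles = [("forest", 6), ("forest", 9), ("forest", 3),   # 5 + 4 + 2 = 11
         ("hills", 8), ("hills", 4), ("hills", 5)]      # 5 + 3 + 4 = 12
print(resource_pips(tiles), meets_threshold(tiles, 11))
```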
<p>These improvements make it possible to more accurately evaluate generated board quality, enabling fairer and more strategic gameplay.</p>
<h2>v2.2.0 Balance Chaos Option (2025/06/14) {#v2-2-0}</h2>
<p><img src="/9a1140a676469f5bc45f3b9376036c89/20250614-v2.2.0-%E3%83%90%E3%83%A9%E3%83%B3%E3%82%B9%E5%B4%A9%E5%A3%8A%E3%82%AA%E3%83%97%E3%82%B7%E3%83%A7%E3%83%B3.webp" alt=""></p>
<p>In the June 14, 2025 update, v2.2.0 was released. Key improvements include:</p>
<ul>
<li><strong>Balance Chaos option:</strong> A new "Balance Chaos" option that intentionally allows resource and number bias. By default, the fairest possible board is generated, but enabling this option lets you create "chaotic boards" with extreme resource or number imbalances.</li>
<li><strong>Resource expected value balance:</strong> Adjustable minimum thresholds for each resource's production expected value, letting you choose the degree of imbalance.</li>
<li><strong>Resource adjacency control:</strong> Toggle whether same resources can be placed adjacently.</li>
<li><strong>Number adjacency control:</strong> Toggle whether high-probability or low-probability numbers can be placed adjacently.</li>
</ul>
<p>This allows the tool to accommodate not only balanced, fairness-focused play but also novelty games and advanced variant rules.</p>
<p>Use the "Balance Chaos" option to enjoy intentionally unfair boards or highly strategic layouts.</p>
<h2>Summary {#conclusion}</h2>
<p>The "Catan Board Generator" helps you avoid the problem of clustered 6s and 8s or biased resource tiles, letting you quickly determine your board layout.</p>
<p>v2.0.0 added Expansion support for 5-6 player games, along with UI improvements for better usability. v2.1.0 introduced resource balance visualization and improved generation quality for fairer, more strategic board generation.</p>
<p>It's perfect for cutting down setup time and getting straight to playing.</p>
<h3>Usage Notes</h3>
<p>This tool is designed to aim for a fair gaming experience, but the "optimality" of a board may vary depending on player strategy and preferences. The generation logic includes probabilistic elements, so please verify the final board balance yourself before playing.</p>
<p>:::post-link{url="/en/tools/catan-board-generator" text="▶︎ Catan Board Generator"}</p>
<p>May your Catan experience be smoother and more enjoyable!</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/d6b7d77209c27bff6db53a497495a21d/20250614-catan-board-generator2.1.0.webp" medium="image"/></item><item><title><![CDATA[Replacing a Failed Fan on the Synology NAS DS218play]]></title><description><![CDATA[Step-by-step guide to replacing a failed fan on the Synology NAS DS218play]]></description><link>https://uhiyama-lab.com/en/blog/dialy/synology-ds218play-fan-replacement/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/dialy/synology-ds218play-fan-replacement/</guid><category><![CDATA[hardware]]></category><pubDate>Tue, 11 Feb 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>I run several <strong>NAS (Network Attached Storage)</strong> units to manage the massive amounts of data from video production and game development. Being able to instantly share assets between multiple PCs over the network is incredibly convenient.</p>
<p>One of my older units, the Synology NAS DS218play, recently started having its fan stop periodically, so I purchased a spare part and replaced it. Here's a record of the process.</p>
<hr>
<h2>NAS Unit + Spare Parts</h2>
<ul>
<li>NAS unit: <a href="https://amzn.to/419jDve">Synology NAS Kit 2-Bay DS218play Quad-Core CPU 1GB RAM - Amazon</a></li>
<li>Spare fan: <a href="https://amzn.to/4jM6tvH">Synology Spare Parts - Replacement Fan FAN_92x92x25_1 (92mm fan, 25mm thick)</a></li>
</ul>
<hr>
<h2>The Problem: Fan Repeatedly Stopping and Restarting</h2>
<p>The issue was that the fan would stop a short while after powering on the Synology NAS, then restart after a few seconds, only to stop again -- repeating this cycle. Warnings like the following were being sent every few minutes:</p>
<p><img src="/2e3fab709e8221962309d2a1e01e700c/001.png" alt=""></p>
<p>Cleaning the fan didn't resolve the issue, so I purchased a spare part.</p>
<hr>
<h2>Replacing with the Spare Part</h2>
<p><img src="/df0d4f8ea21cefbeb5bbe117feb42f0b/002.jpeg" alt=""></p>
<p>The spare fan arrived. I purchased it from the official Synology seller on Amazon.</p>
<p><img src="/6b17177cf67c9e94f6885f405b5bc9e7/003.jpeg" alt=""></p>
<p>Remove the screws from the Synology NAS.</p>
<p><img src="/1ddcb5e01c0345e2a66578218a4d5198/004.jpeg" alt=""></p>
<p>The fan area has a very simple structure. Disconnect the power cable and peel off the silver label.</p>
<p><img src="/d5c7491c315bd564aea33ce07b9be973/005.jpeg" alt=""></p>
<p>The fan mounting method varies by Synology NAS model.</p>
<p>For example, the DS218 has the fan secured with screws from the back, but the DS218play uses rubber grommets on the inside like this.</p>
<p>I couldn't find a system maintenance page specifically for the DS218play, but you can check the replacement procedure for the DS223j at the link below. The process is nearly identical for the DS218play.</p>
<p><a href="https://kb.synology.com/ja-jp/HIGs/DS223j_HIG/4">DS223j Product Manual 4.1 Replacing a Failed Fan - Synology Knowledge Center</a></p>
<p><img src="/3b1b0b5c4fdb543e3d8e3ca75080dc6b/006.jpg" alt=""></p>
<p>Remove the four screws on the bottom of the NAS, work out the rubber grommets in the lower area that are hard to reach with your fingers, and swap in the spare fan.</p>
<p><img src="/a3835931b8915b988faff8d9dd3254dc/007.jpg" alt=""></p>
<p>Installation complete. The fan is now running normally again.</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/083023f3bd887a9f79d91b64550c867e/20250211-Thumb_Synology%E3%83%95%E3%82%A1%E3%83%B3%E4%BA%A4%E6%8F%9B_%E3%82%B5%E3%83%A0%E3%83%8D%E3%82%A4%E3%83%AB.jpg" medium="image"/></item><item><title><![CDATA[I Built a 'Progress Checksheet Maker' to Stay Motivated While Learning]]></title><description><![CDATA[Thick textbooks, long online courses -- it's hard to stay motivated when you can't see your progress. I built a 'Progress Checksheet Maker' that lets you print a checksheet and track your progress with a pen.]]></description><link>https://uhiyama-lab.com/en/blog/tooldev/tools-progress-checksheet/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/tooldev/tools-progress-checksheet/</guid><pubDate>Sun, 02 Feb 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Thick textbooks, long online courses -- we've all started learning something new only to struggle to see it through to the end. When you can't see your daily progress, it's hard to stay motivated.</p>
<p>That's where an analog "progress checksheet" can be surprisingly effective. By printing it on paper and physically checking off items with a pen, you get that tangible feeling of "I worked hard today!" and "Almost there!"</p>
<p>So I built the "Progress Checksheet Maker" -- a browser tool that lets you create and print checksheets with ease.</p>
<p><img src="/8945c1e078474d4cde312740b7cb1e23/progress-checksheet.png" alt=""></p>
<p>It's useful for all kinds of goals -- studying for certification exams, tracking reading progress, managing online course completion, and more.</p>
<p>:::post-link{url="/en/tools/progress-checksheet/" text="▶︎ Progress Checksheet Maker (Printable)"}</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#overview">The Struggle to Stay Motivated</a></li>
<li><a href="#features">Features</a></li>
<li><a href="#usage">How to Use</a></li>
<li><a href="#benefits">Ideas for Use</a></li>
<li><a href="#tech">Runs Entirely in Your Browser</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>The Struggle to Stay Motivated {#overview}</h2>
<p><img src="/dc1fd055a8b1513f4e6201eebd6d9b9e/%E9%80%B2%E6%8D%97%E3%83%81%E3%82%A7%E3%83%83%E3%82%AF%E3%82%B7%E3%83%BC%E3%83%88%E3%83%A1%E3%83%BC%E3%82%AB%E3%83%BC.webp" alt=""></p>
<p>When working through textbooks, workbooks, or online courses, it's tough to stay motivated when you can't see your progress over time.</p>
<p>There are digital tools for tracking progress, but the analog act of "printing it on paper and checking things off with a pen" can surprisingly boost your sense of accomplishment and motivation to keep learning.</p>
<p>That's why I created this tool -- just enter a cover image and start/end page numbers to automatically generate an A4 landscape checksheet.</p>
<p>:::post-link{url="/en/tools/progress-checksheet/" text="▶︎ Progress Checksheet Maker (Printable)"}</p>
<h2>Features {#features}</h2>
<ul>
<li>Create checksheets easily with just an image and page numbers</li>
<li>Set any image as the cover -- a textbook cover, course thumbnail, etc. (or go without an image)</li>
<li>Specify start and end pages to create a checksheet for just the range you need</li>
<li>Optimized for A4 landscape printing (3508x2480 pixels)</li>
<li>Drag &#x26; drop to set the cover image easily</li>
<li>All image processing happens in the browser -- no data is sent to a server</li>
<li>Checkbox grid size and count are automatically optimized based on total page count</li>
<li>Download the generated checksheet as a PNG image instantly</li>
<li>Physically checking off items helps you feel your progress and stay motivated</li>
</ul>
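<p>As a rough illustration of the "grid size is automatically optimized" feature above, here is a hypothetical sketch in plain JavaScript -- this is not the tool's actual algorithm, just one way a rows-by-columns layout could be derived from the page count alone:</p>
<pre><code>// Hypothetical sketch -- not the Progress Checksheet Maker's actual code.
// Choose a checkbox grid that is wider than tall, to suit A4 landscape.
function gridLayout(totalPages) {
  // Aim for roughly 1.5x as many columns as rows.
  const columns = Math.ceil(Math.sqrt(totalPages * 1.5));
  const rows = Math.ceil(totalPages / columns);
  return { columns: columns, rows: rows, cells: columns * rows };
}

// Example: pages 10 through 150 = 141 checkboxes.
// gridLayout(141) yields a 15-column by 10-row grid (150 cells).
</code></pre>
<p>Because the cell counts are rounded up, the grid always has at least as many checkboxes as pages; any leftover cells can simply be left unchecked.</p>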
<h2>How to Use {#usage}</h2>
<p>The basic flow is three steps:</p>
<ol>
<li><strong>Select a cover image (optional)</strong></li>
</ol>
<p>Drag &#x26; drop an image file (like a textbook cover) or click the area to select one. It's fine to skip this step.</p>
<ol start="2">
<li><strong>Enter page numbers</strong></li>
</ol>
<p>Enter the "Start page" and "End page." For example, to track pages 10 through 150, enter "10" for the start and "150" for the end. The preview updates automatically.</p>
<ol start="3">
<li><strong>Download the image</strong></li>
</ol>
<p>Just click the "Download Image" button. A PNG image will be downloaded, ready to print.</p>
<h2>Ideas for Use {#benefits}</h2>
<p>Printing the checksheet and checking off items with a pen has several benefits:</p>
<ul>
<li>Checking off items one by one gives you a visible sense of progress and accomplishment</li>
<li>Knowing "I've come this far" or "just a little more to go" provides concrete motivation to keep learning</li>
<li>Posting the sheet on a wall or keeping it in a planner keeps your learning goals visible, helping build habits</li>
<li>Use task counts instead of page numbers to create checksheets for 30-day challenges and similar goals</li>
<li>Recording progress with pen and paper can help improve focus and reduce digital fatigue</li>
</ul>
<p>It works for all kinds of things -- managing children's workbook progress, tracking hobby projects like knitting or crafts, logging long TV series, and more.</p>
<h2>Runs Entirely in Your Browser {#tech}</h2>
<p>This tool runs entirely in your browser. No special software installation is required.</p>
<p>Cover image processing and checksheet rendering all happen within the browser. Your selected image data is never sent to any external server, so you can use it with peace of mind.</p>
<p>:::post-link{url="/en/tools/progress-checksheet/" text="▶︎ Progress Checksheet Maker (Printable)"}</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/dc1fd055a8b1513f4e6201eebd6d9b9e/%E9%80%B2%E6%8D%97%E3%83%81%E3%82%A7%E3%83%83%E3%82%AF%E3%82%B7%E3%83%BC%E3%83%88%E3%83%A1%E3%83%BC%E3%82%AB%E3%83%BC.webp" medium="image"/></item><item><title><![CDATA[Deploy a Next.js Site to Cloudflare Pages: A Complete Guide to Custom Domain Setup]]></title><description><![CDATA[A step-by-step guide to deploying a Next.js site to Cloudflare Pages and setting up a custom domain]]></description><link>https://uhiyama-lab.com/en/blog/webdev/nextjs-cloudflare-custom-domain/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/webdev/nextjs-cloudflare-custom-domain/</guid><category><![CDATA[cloudflare]]></category><category><![CDATA[nextjs]]></category><pubDate>Wed, 29 Jan 2025 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>This article is a complete step-by-step guide for beginners on how to <strong>deploy</strong> a website built with the popular framework <strong>Next.js</strong> to the feature-rich hosting service <strong>Cloudflare Pages</strong>, and then set up your own <strong>custom domain</strong> (e.g., <code>your-domain.com</code>).</p>
<p>While Vercel (the creators of Next.js) is well-known as a deployment target, <strong>Cloudflare Pages</strong> is also an excellent option. It offers a <strong>fast CDN</strong>, powerful <strong>security features</strong>, and <strong>generous free-tier limits</strong>, making it a great choice for everything from personal projects to small and mid-sized websites. In this guide, I'll walk you through the deployment process I frequently use with Cloudflare Pages.</p>
<p><em>Note: This guide assumes your Next.js project code is already pushed to a GitHub (or GitLab) repository (private repositories are fine).</em></p>
<hr>
<p><strong>Steps covered in this article</strong></p>
<ol>
<li><a href="#cloudflare-setup">Add a Next.js project to Cloudflare Pages</a></li>
<li><a href="#private-repo">Grant access to a private GitHub repository</a></li>
<li><a href="#build-config">Build configuration and tips (Node.js version, compatibility flags)</a></li>
<li><a href="#custom-domain">Set up a custom domain on Cloudflare Pages (nameserver changes)</a></li>
<li><a href="#summary">Summary: Next.js + Cloudflare Pages + Custom Domain Setup Flow</a></li>
</ol>
<hr>
<p>:::ad</p>
<h2>Add a Next.js Project to Cloudflare Pages</h2>
<p>The first step is to create a new Pages project in the Cloudflare dashboard and link it to the GitHub repository containing your Next.js project.</p>
<ol>
<li>Log in to the Cloudflare dashboard and select <strong>Workers &#x26; Pages</strong> from the left menu. (<em>Note: Cloudflare's UI is frequently updated, so menu names may vary slightly. Look for similar items.</em>)</li>
</ol>
<p><img src="/dfa43011604702c7a72dafa125429d10/001-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="2">
<li>On the overview screen, find and click the <strong>Create application</strong> button.</li>
</ol>
<p><img src="/4535771dc6d1e2b39e594487e1edc031/002-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="3">
<li>On the "Create an application" screen, make sure the <strong>Pages</strong> tab is selected and click the <strong>Connect to Git</strong> button. Cloudflare Pages primarily uses Git repository integration for automatic deployments.</li>
</ol>
<p><img src="/d95690e92ea9a31768e4788a9132f368/003-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="4">
<li>
<p>You'll be taken to the GitHub or GitLab account connection screen. If this is your first time connecting, click "Add account" and follow the prompts to authenticate your GitHub account and authorize Cloudflare access.</p>
</li>
<li>
<p>Once the account is connected, the "Select a repository" section will display a list of Git repositories from your linked account. Select the repository containing the Next.js project you want to deploy.</p>
</li>
</ol>
<p><img src="/30346b498642a2b32802bd0b8c0629a9/004-20250129-NextjsCloudflare.png" alt=""></p>
<p>:::ad</p>
<h2>Grant Access to a Private GitHub Repository</h2>
<p><strong>[Tip]</strong> If the repository you want to deploy is set to private, it may not appear in the repository list above. This happens because the Cloudflare Pages application doesn't have permission to access that private repository. In that case, you'll need to grant access on the GitHub side using the following steps.</p>
<ol>
<li>Log in to GitHub, click your profile icon in the top right, open <strong>Settings</strong>, and select <strong>Applications</strong> from the left menu.</li>
</ol>
<p><img src="/8ce05f133841e1c988705f4cdd919e98/005-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="2">
<li>Select the <strong>Installed GitHub Apps</strong> tab and find and click <strong>Cloudflare Pages</strong> (or a similar name) in the list.</li>
</ol>
<p><img src="/062e3f0f72b0df23edb2c6072ea27dfb/006-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="3">
<li>Scroll to the <strong>Repository access</strong> section. Select the <strong>Only select repositories</strong> option, then check the repository for the Next.js project you want to deploy to Cloudflare Pages. Click <strong>Save</strong> to save your changes.</li>
</ol>
<p><strong>[Security Recommendation]</strong> Even if "All repositories" is currently selected, it's recommended to switch to "Only select repositories" and limit access to only the repositories you need, for better security.</p>
<p><img src="/e59fb5fa7a6636270d106b9d6a2095c7/007-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="4">
<li>After completing the GitHub settings, return to the Cloudflare Pages screen and refresh the repository list (or reload the page). The private repository you just granted access to should now appear. Select it and click <strong>Begin setup</strong> to proceed.</li>
</ol>
<p><img src="/d4c3a78a7601d2caf4e773694ea7f977/008-20250129-NextjsCloudflare.png" alt=""></p>
<p>:::ad</p>
<h2>Build Configuration and Tips (Node.js Version, Compatibility Flags)</h2>
<p>After selecting a repository, you'll see the build and deployment settings screen. This is where you configure how Cloudflare Pages fetches code from the GitHub repository, builds it, and deploys it.</p>
<ol>
<li>In the "Build settings" section, open the <strong>Framework preset</strong> dropdown menu and select <strong>Next.js</strong>. Cloudflare Pages supports many frameworks, and selecting a preset automatically fills in the optimal build commands and settings for that framework.</li>
</ol>
<p><img src="/e0bd3eb315bf52c825dd4de95d1700de/009-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="2">
<li>After selecting the preset, the "Build command" (e.g., <code>npm run build</code> or <code>next build</code>) and "Build output directory" (e.g., <code>.next</code> or <code>.vercel/output/static</code>, which may vary depending on your Next.js version and configuration) will be automatically populated. Usually, you can leave these as-is.</li>
</ol>
<p><img src="/19e595450239f29de19068d3773fa45e/010-20250129-NextjsCloudflare.png" alt=""></p>
<p><strong>[Note 1] When you need to specify the Node.js version</strong></p>
<p>The basic setup is complete at this point, but clicking "Save and Deploy" may result in build errors. A common cause is when <strong>the Node.js version used in Cloudflare Pages' build environment is older than what your Next.js project requires</strong>.</p>
<p>Check the build logs. If you see error messages like the following, you need to specify the Node.js version:</p>
<pre><code>Initializing build environment. Failed: error finding node version '>=18.0.0'
You are using Node.js 18.17.1.
For Next.js, Node.js version "^18.18.0 || ^19.8.0 || >= 20.0.0" is required.
</code></pre>
<p><em>(The version numbers in the error message may differ.)</em></p>
<p>To fix this, explicitly specify the Node.js version in your Cloudflare Pages project settings.</p>
<p>Go to your project's <strong>Settings</strong> > <strong>Environment variables</strong>, and in the "Build environment variables" section (or both "Production" and "Preview" environment variables), click <strong>Add variable</strong> and configure it as follows:</p>
<ul>
<li><strong>Variable name:</strong> <code>NODE_VERSION</code></li>
<li><strong>Value:</strong> The Node.js version required by your Next.js project (e.g., <code>18.18.0</code> or <code>20.0.0</code>. Match it to the error message, Next.js documentation, or your local development environment). If you have it specified in the <code>engines</code> field of your <code>package.json</code>, use that value.</li>
</ul>
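<p>For reference, a project that pins its Node.js version in the <code>engines</code> field of <code>package.json</code> might look like the following (the version shown is only an example -- match it to your project's actual requirement):</p>
<pre><code>{
  "engines": {
    "node": ">=20.0.0"
  }
}
</code></pre>
<p>Whatever value you use there, set the same version in the <code>NODE_VERSION</code> environment variable so your local and Cloudflare build environments stay in sync.</p>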
<p><img src="/cc174ca0b081ed64e92e5181dc8f8e02/011-20250129-NextjsCloudflare.png" alt=""></p>
<p>After setting the environment variable, re-deploy (e.g., click "Retry deployment" from the build log screen). The specified Node.js version will be used and the build should complete successfully.</p>
<p><strong>[Note 2] When you need to set the Node.js compatibility flag</strong></p>
<p>If the build succeeds but you see the following error screen when visiting the deployed site (<code>xxx.pages.dev</code>):</p>
<p><img src="/be406fbe2f5ac591d6aba15a59891ce2/012-20250129-NextjsCloudflare.png" alt=""></p>
<p>This is called a <strong>Node.JS Compatibility Error</strong>. It means you need to enable Node.js API compatibility mode for the Cloudflare Pages runtime environment (Cloudflare Workers) to correctly run your Next.js application (especially server-side features and API routes).</p>
<p>To fix this, configure the following setting:</p>
<p>Go to your project's <strong>Settings</strong> > <strong>Functions</strong> > <strong>Compatibility flags</strong> section (or a similar item). Click <strong>Configure</strong> (or "Add"), add the <strong><code>nodejs_compat</code></strong> flag, and save. (Typically, set this for both production and preview environments.)</p>
<p><img src="/8d17cf2c3f6e6a3b595816dda0ea6bb1/013-20250129-NextjsCloudflare.png" alt=""></p>
<p>After setting this flag and redeploying, your Next.js application should display correctly when you visit the site.</p>
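<p>As an alternative to the dashboard, Cloudflare also supports configuring Pages projects with a <code>wrangler.toml</code> file in the repository root. If you use that approach, the equivalent setting should look roughly like this (the <code>compatibility_date</code> below is an example -- use a date appropriate for your project):</p>
<pre><code># wrangler.toml
compatibility_date = "2024-09-23"
compatibility_flags = ["nodejs_compat"]
</code></pre>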
<p>:::ad</p>
<h2>Set Up a Custom Domain on Cloudflare Pages (Nameserver Changes)</h2>
<p>Once your build and deployment succeed and the site is accessible at a URL like <code>project-name.pages.dev</code>, the final step is to assign your own <strong>custom domain</strong> (e.g., <code>your-cool-site.com</code>) to this Cloudflare Pages project.</p>
<p>Here's the general flow for setting up a custom domain:</p>
<ol>
<li><strong>Cloudflare Pages configuration:</strong> Add the custom domain name you want to use in the project settings.</li>
<li><strong>Note the nameserver information:</strong> Cloudflare will provide the nameserver addresses to configure for your domain.</li>
<li><strong>Domain registrar configuration:</strong> In the management panel of the service where you registered your domain (e.g., Namecheap, Google Domains, Route 53, etc.), change the domain's nameserver settings to the ones provided by Cloudflare.</li>
<li><strong>Wait for DNS propagation:</strong> Wait for the nameserver change information to propagate across the internet. This usually takes a few minutes to a few hours, but can take up to about 48 hours.</li>
<li><strong>Activate on Cloudflare Pages:</strong> Once Cloudflare confirms the nameserver change, the domain status becomes "Active," and your Pages project is connected to your custom domain.</li>
</ol>
<p>Below is a step-by-step example. (The basic flow is the same regardless of your registrar.)</p>
<h3>Step 1: Add a Custom Domain to Cloudflare Pages</h3>
<ol>
<li>Go to your Cloudflare Pages project, select the <strong>Custom domains</strong> tab, and click the <strong>Set up a custom domain</strong> button.</li>
</ol>
<p><img src="/b9795d9a343feb37b13ec70da802e18d/014-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="2">
<li>Enter your custom domain name (e.g., <code>your-cool-site.com</code>) in the text box and click "Continue."</li>
</ol>
<p><img src="/b9795d9a343feb37b13ec70da802e18d/015-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="3">
<li>If the domain isn't yet managed by Cloudflare, the process of transferring (or partially delegating) DNS management to Cloudflare will begin. Follow the on-screen instructions and click buttons like <strong>Begin DNS transfer</strong> or "Add site." (<em>Note: This doesn't transfer the domain itself to Cloudflare -- it only points DNS management to Cloudflare.</em>)</li>
</ol>
<p><img src="/ac3508f4e68657ddf8f8ecdc62b8a7e4/016-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="4">
<li>Enter the domain name and proceed to the step where Cloudflare scans existing DNS records. Click "Continue."</li>
</ol>
<p><img src="/8c78667cf2cb92208782b0618de6a29f/017-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="5">
<li>A Cloudflare plan selection screen will appear. For basic custom domain usage, the <strong>Free plan</strong> at the bottom of the screen is typically sufficient. Select it and click "Continue."</li>
</ol>
<p><img src="/33b4685caa79f0a97ee18cbee96c605c/018-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="6">
<li>Cloudflare will display any existing DNS records it found during the scan. Review the contents (it's fine to leave them as-is if you're unsure) and click "Continue."</li>
</ol>
<p><img src="/388192ac00a0cc2bc9ce43b915e4743e/019-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="7">
<li>If a confirmation popup appears, click "Confirm."</li>
</ol>
<p><img src="/616b184bafe178e4d6dc95de1aa6299f/020-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="8">
<li><strong>[Important]</strong> This screen will display the <strong>Cloudflare nameserver addresses (usually two)</strong> that you need to set for your domain. Copy or note these addresses accurately (e.g., <code>ada.ns.cloudflare.com</code>, <code>ben.ns.cloudflare.com</code>). You'll need them in the next step.</li>
</ol>
<p><img src="/946c58750fc180221415b5f54c182516/021-20250129-NextjsCloudflare.png" alt=""></p>
<h3>Step 2: Change Nameservers at Your Domain Registrar</h3>
<p>Next, log in to the management panel of the service where you registered your domain (domain registrar) and change the nameserver settings.</p>
<ol>
<li>
<p>Go to your domain registrar's website and log in.</p>
</li>
<li>
<p>In the domain management menu, find the "Nameserver settings" or "DNS settings" option for your target domain and open the editing screen.</p>
</li>
<li>
<p>Remove or change the current nameserver settings (usually set to the registrar's own nameservers) and enter the <strong>two nameserver addresses</strong> provided by Cloudflare. Save the changes.</p>
</li>
</ol>
<p><img src="/e307fad0e4a297adc6aecb0f0b2ff974/022-20250129-NextjsCloudflare.png" alt=""></p>
<p><strong>[Note] DNS propagation takes time!</strong></p>
<p>It takes time for nameserver changes to propagate across the internet. While it can sometimes take effect within minutes, it usually takes <strong>several hours</strong> and can take up to <strong>about 48 hours</strong> in some cases. During this time, your custom domain may not be accessible or may show outdated information. Be patient and wait. Check the status periodically on the Cloudflare dashboard or wait for the confirmation email from Cloudflare.</p>
<p><img src="/b88980c78926d23a5926b41010c93998/023-20250129-NextjsCloudflare.png" alt=""></p>
<h3>Step 3: Activate the Domain on Cloudflare Pages</h3>
<p>Once Cloudflare recognizes the nameserver change, the domain status on the Cloudflare dashboard will change from "Pending" to <strong>Active</strong>. (In my case, this was relatively quick and took about 5 minutes.)</p>
<p><img src="/29322942e66e1233f7cc1b7f0533c45d/024-20250129-NextjsCloudflare.png" alt=""></p>
<p>Once the domain is active, it's time to formally connect your Cloudflare Pages project to your custom domain.</p>
<ol>
<li>
<p>Go back to the Cloudflare Pages project settings and open the <strong>Custom domains</strong> tab.</p>
</li>
<li>
<p>Click the "Set up a custom domain" button, re-enter your custom domain name, and click "Continue."</p>
</li>
<li>
<p>This time, since Cloudflare recognizes the domain, you should see a screen like "Activate domain" or "Verify DNS records." Cloudflare Pages will automatically set (or suggest) the required DNS records (usually a CNAME record). Review the details and click the <strong>Activate domain</strong> button.</p>
</li>
</ol>
<p><img src="/173482d29608e1eeabb5c882ed88174b/025-20250129-NextjsCloudflare.png" alt=""></p>
<ol start="4">
<li>The status will show "Initializing" or "Verifying," and after a short wait, it will change to <strong>Active</strong>. The setup is now complete!</li>
</ol>
<p>Once the setup is done, visiting your custom domain (e.g., <code>https://your-cool-site.com</code>) will display the Next.js site deployed on Cloudflare Pages. (SSL certificates are automatically issued and managed by Cloudflare.)</p>
<p><img src="/1289bdf01e20fc6fd9cc9d4ca2418514/026-20250129-NextjsCloudflare.png" alt=""></p>
<p>:::ad</p>
<h2>Summary: Next.js + Cloudflare Pages + Custom Domain Setup Flow</h2>
<p>That covers the complete process of deploying a <strong>Next.js</strong> site to <strong>Cloudflare Pages</strong> and setting up a <strong>custom domain</strong>. Let's recap the overall flow:</p>
<ol>
<li><strong>GitHub preparation:</strong> Push your Next.js project to a GitHub repository.</li>
<li><strong>Cloudflare Pages integration:</strong> Create a project on Cloudflare Pages and link it to the GitHub repository. (For private repositories, configure access permissions on the GitHub side.)</li>
<li><strong>Build settings and troubleshooting:</strong>
<ul>
<li>Select "Next.js" in the framework preset.</li>
<li>If needed, set the <code>NODE_VERSION</code> environment variable to specify the Node.js version.</li>
<li>If needed, set the <code>nodejs_compat</code> compatibility flag.</li>
</ul>
</li>
<li><strong>Custom domain setup:</strong>
<ul>
<li>Add a custom domain to Cloudflare Pages and note the Cloudflare nameservers provided.</li>
<li>In your domain registrar's management panel, change the nameservers to Cloudflare's.</li>
<li>Wait for DNS propagation, and once the domain is active on Cloudflare, activate the domain in the Pages project.</li>
</ul>
</li>
</ol>
<p>While there are a few things to watch out for -- like Node.js-related build errors and DNS propagation wait times -- the process itself is fairly straightforward. Cloudflare Pages offers fast delivery, strong security, and generous free-tier limits, making it an excellent hosting choice for Next.js applications. Give it a try!</p>
<p>(<em>This article covers the basic deployment steps. Depending on your application's requirements, you may need additional configuration such as environment variables, custom build commands, redirect rules, and more. Refer to the official Cloudflare Pages documentation for further details.</em>)</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/dfa43011604702c7a72dafa125429d10/001-20250129-NextjsCloudflare.png" medium="image"/></item><item><title><![CDATA[Brute-Forcing a 190,000 Polygon VeryPoor VRChat Avatar Down to 70,000 Polygons]]></title><description><![CDATA[A hands-on log of reducing a VRChat avatar from 188,784 to 69,995 polygons (65% reduction) using MantisLODEditor and MeshDeleter]]></description><link>https://uhiyama-lab.com/en/blog/dialy/vrchat-avatar-optimization/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/dialy/vrchat-avatar-optimization/</guid><category><![CDATA[vrchat]]></category><pubDate>Sun, 22 Dec 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Sometimes your VRChat avatar ends up with a "Very Poor" performance rank. Very Poor avatars may be hidden in certain worlds or cause performance issues for other users, so optimization is generally recommended where possible.</p>
<p>In this post, I used the Unity tools "Mantis LOD Editor" and "MeshDeleterWithTexture" to reduce my avatar's polygon count, successfully improving the performance rank from "Very Poor" to "Poor." Here's a record of the process and methods used.</p>
<p><strong>Optimization Results: Before and After</strong></p>
<p>Before: 188,784 polygons / Performance Rank: Very Poor</p>
<p><img src="/04e252ad57f1cd36e48091df825367e8/20241216-1-VRC%E6%9C%80%E9%81%A9%E5%8C%96-%E4%BD%9C%E6%A5%AD%E5%89%8D.webp" alt="Avatar before optimization (Very Poor rank)"></p>
<p>After: 69,995 polygons / Performance Rank: Poor</p>
<p><img src="/bddc141e1028e8f54846e32d8ea8abed/20241216-2-VRC%E6%9C%80%E9%81%A9%E5%8C%96-%E4%BD%9C%E6%A5%AD%E5%BE%8C.webp" alt="Avatar after optimization (Poor rank)"></p>
<p>As shown, the polygon count was reduced by about 63% while keeping visual degradation to a minimum.</p>
<p><strong>[Before you read: a note on expectations]</strong></p>
<p>This article documents the process of forcibly reducing a ~190,000 polygon high-end model to under 70,000. Honestly, after going through the process, I questioned whether this level of reduction was really necessary.</p>
<p>In practice, keeping a separate lightweight avatar for performance-critical situations is more realistic. That said, this guide is useful as a reference for understanding polygon reduction techniques, or for more modest optimizations (e.g., 100K to 70K polygons). Read it as a learning experience!</p>
<p>Here are the specific methods I used to effectively reduce polygon count. These techniques are also valuable for avatar customization and original model creation:</p>
<ul>
<li>Removing hidden parts: Deleting body mesh that's invisible under clothing (e.g., elbows)</li>
<li>Mantis LOD Editor reduction: Reducing polygon counts on individual parts (clothes, hair, accessories) to the limit before visual quality breaks down</li>
<li>MeshDeleterWithTexture partial deletion: Removing unnecessary clothing decorations or hidden mesh by specifying texture regions</li>
<li>(Advanced) Part separation with MeshDeleter: Splitting combined parts (e.g., inner + outer clothing as one object) to optimize them individually</li>
</ul>
<p>This article walks through each step in order.</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#tools">Required Tools and Setup</a></li>
<li><a href="#terminology">Useful Terminology</a></li>
<li><a href="#mantis-basics">Polygon Reduction with Mantis</a></li>
<li><a href="#mantis-limitations">Cases Where Mantis Alone Isn't Enough</a></li>
<li><a href="#meshdeleter">Partial Deletion with MeshDeleter</a></li>
<li><a href="#advanced">Advanced: Separating Clothing Parts</a></li>
<li><a href="#summary">Optimization Results and Key Takeaways</a></li>
<li><a href="#references">References</a></li>
<li><a href="#appendix">Addendum: Considerations for High-End Models</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Required Tools and Setup {#tools}</h2>
<p>Here are the main tools used for this VRChat avatar optimization (polygon reduction):</p>
<ul>
<li>
<p><strong><a href="https://assetstore.unity.com/packages/tools/modeling/mantis-lod-editor-professional-edition-37086?locale=ja-JP">Mantis LOD Editor - Professional Edition</a></strong> (Unity Asset Store / Paid: ~$55)</p>
<ul>
<li>A well-established Unity asset with high-quality polygon reduction (decimation) capabilities. Efficiently reduces polygon counts on individual avatar parts. After purchase, import it via Unity's Package Manager > My Assets.</li>
</ul>
</li>
<li>
<p><strong><a href="https://booth.pm/ja/items/5409262">Non-Destructive Polygon Reduction: NDMF Integration for Mantis LOD Editor</a></strong> (Booth: by Hitsubu / Free *Requires Mantis LOD Editor)</p>
<ul>
<li>A tool that lets you use Mantis LOD Editor within the "NDMF" framework, widely used for VRChat avatar modification, in a safer and simpler (non-destructive) way. Download from Booth and import the unitypackage. This is what you'll actually apply to your avatar.</li>
</ul>
</li>
<li>
<p><strong><a href="https://booth.pm/ja/items/1501527">MeshDeleterWithTexture beta</a></strong> (Booth: by Gatosyocora / Free)</p>
<ul>
<li>An incredibly useful Unity editor extension that lets you delete mesh regions by painting on a texture image. Ideal for removing fine clothing decorations or hidden mesh that Mantis can't easily handle. Download from Booth and import the unitypackage.</li>
</ul>
</li>
<li>
<p><strong>anatawa12's VRC Avatar Tools (formerly: gists pack)</strong> (Install via VCC)</p>
<ul>
<li>A toolkit that includes detailed avatar performance statistics. The "<strong>Actual Performance</strong>" tab is <strong>essential</strong> for monitoring current polygon counts and performance rank in real-time during optimization. Easily added via VCC (VRChat Creator Companion) under "Manage Project."</li>
</ul>
</li>
</ul>
<p><strong>[Important] How to check with the Actual Performance tab:</strong></p>
<p>After installing anatawa12's VRC Avatar Tools, a new "Avatars" tab appears in the VRChat SDK Control Panel with an "Actual Performance" section. After applying Mantis or MeshDeleter optimizations, <strong>enter Play Mode in Unity</strong> to update these stats and confirm your current polygon count and performance rank. The goal is to get the "Polygons" value <strong>below 70,000</strong> (the Poor rank threshold).</p>
<p><img src="/4ee413c875f7ee17a7c7e58982d8d11e/20241216-3-anatawa.png" alt="Adding anatawa12&#x27;s VRC Avatar Tools via VCC"></p>
<p><img src="/d3b4e47773d1c5ab82e8a7fe7db48b41/20241216-4-ActualPerformance.png" alt="Checking polygon count and rank in the Actual Performance tab"></p>
<p>(In this example: 188,784 polygons = Very Poor. Target: under 70,000!)</p>
<h2>Useful Terminology {#terminology}</h2>
<p>Here are some terms that will help you understand the optimization process:</p>
<ul>
<li>
<p><strong>Non-Destructive</strong>
An editing approach that doesn't modify the original data directly. The NDMF version of Mantis and MeshDeleter both operate non-destructively, preserving the original mesh data. This means that if you make a mistake or don't like the results, you can simply remove the tool's component or revert the settings to <strong>easily restore the original state</strong>. This is a huge advantage that makes it safe for beginners to experiment.</p>
</li>
<li>
<p><strong>NDMF (Non-Destructive Modular Framework)</strong>
A framework for VRChat avatar modification. It manages various avatar settings and modifications as "components" that are automatically applied at build time. It pairs well with non-destructive workflows, and many useful tools support NDMF. The "NDMF Integration for Mantis LOD Editor" used in this guide is one of them.</p>
</li>
</ul>
<p>:::ad</p>
<h2>Polygon Reduction with Mantis {#mantis-basics}</h2>
<p><img src="/4b45f136e3deef181d7ec2a26f8799c9/20241216-5-%E3%83%9D%E3%83%AA%E3%82%B4%E3%83%B3%E5%AF%BE%E8%B1%A1.png" alt="VRChat SDK Validation screen identifying parts with too many polygons"></p>
<p>Start by identifying which parts of your avatar are the biggest polygon offenders.</p>
<ol>
<li>
<p>Open the VRChat SDK Control Panel in Unity and select the "Builder" tab.</p>
</li>
<li>
<p>With your avatar selected, attempt to build -- performance warnings (Validation Results) will appear. Click the "<strong>Select</strong>" button next to the "Polygons" warning message to highlight the meshes (clothing parts, etc.) with the highest polygon counts in the Hierarchy window.</p>
</li>
<li>
<p>Add the "<strong>NDMF Mantis LOD Editor</strong>" component to the highlighted objects (clothing parts, etc.). In the Inspector window, click "Add Component" and search for "Mantis."</p>
</li>
<li>
<p>Adjust the "<strong>Quality</strong>" slider on the added component. Lowering this value reduces the polygon count, but going too far will cause the mesh shape to degrade.</p>
</li>
</ol>
<p><img src="/3dfcb689a443b88d00b4267fedd9325b/20241216-6-MantisLOD-%E3%82%B9%E3%83%A9%E3%82%A4%E3%83%80%E3%83%BC.webp" alt="NDMF Mantis LOD Editor component Quality slider"></p>
<p><strong>[Tip] Use Shaded Wireframe view</strong></p>
<p>When adjusting the Quality slider, switch Unity's Scene view to "<strong>Shaded Wireframe</strong>" mode. This makes it easier to visually assess how much the polygons are being reduced and whether the mesh is degrading. Toggle between "Shaded" and "Shaded Wireframe" views to find the sweet spot that preserves visual quality.</p>
<p><img src="/97c3930af34380a375ab4fba109852cf/20241216-7-ShadedWireframe.png" alt="Checking polygons in Shaded Wireframe view"></p>
<p>The basic workflow is: identify high-polygon parts, add the component, and adjust Quality -- repeating this for each major part.</p>
<h3>[Important] Delete Unused Parts Completely</h3>
<p>When customizing avatars, it's tempting to keep old outfits or hairstyles "just in case" by simply hiding them in the Hierarchy. However, <strong>hidden meshes may still count toward the polygon total!</strong></p>
<p>In my case, I had the original Manuka avatar's apron parts hidden but still present. Completely deleting them from the Hierarchy saved <strong>approximately 16,000 polygons</strong>.</p>
<p>Don't hesitate to delete parts you're not using. If you need them later, you can always reimport from the original avatar or outfit unitypackage.</p>
<p><img src="/fe49145f606f644bed17c449179cf053/20241216-8-%E4%B8%8D%E8%A6%81%E3%83%91%E3%83%BC%E3%83%84%E3%81%AE%E5%89%8A%E9%99%A4.png" alt="Deleting unnecessary parts from the Hierarchy"></p>
<p>(Reference: <a href="https://note.com/kohadachan/n/n68e1f8f15606">How to escape VeryPoor with just Unity | Kohada</a>)</p>
<h3>[Caution] Be Very Careful with Face and Body Polygon Reduction!</h3>
<p><img src="/9e3fc02e22c9124162041785f1c775d7/20241216-9-%E9%A1%94%E3%83%91%E3%83%BC%E3%83%84%E3%81%AF%E3%83%8E%E3%83%BC%E3%82%BF%E3%83%83%E3%83%81.png" alt="The detailed mesh structure around the face"></p>
<p>While Mantis is effective for polygon reduction, <strong>it's generally best to leave the "face" and "body" meshes untouched</strong>.</p>
<p>The face area in particular is composed of extremely fine polygons to support rich facial expressions. The mouth area needs various polygon configurations for lip sync shapes, and the eye area needs them for blinking and emotion expressions.</p>
<p>Carelessly reducing these areas with Mantis has a <strong>high risk of causing the mouth to break during speech or facial expressions to collapse entirely</strong>. Similarly, reducing body joint areas can result in unnatural appearances during poses.</p>
<p>Focus polygon reduction on <strong>clothing, hair, and accessories</strong>, keeping the face and body as close to their original state as possible.</p>
<h3>[Decision] Sometimes Design Compromises Are Necessary</h3>
<p><img src="/7bdb2a6a77a323041479748e02564585/20241216-10-%E5%B0%BB%E5%B0%BE%E6%9C%80%E9%81%A9%E5%8C%96.png" alt="Tail part before and after reduction"></p>
<p>When you simply can't reach your target polygon count (70,000 for Poor rank), you may need to make the tough call of sacrificing part of the design.</p>
<p>For my avatar, I reduced the tail part's polygon count with Mantis to some degree, but it still wasn't enough to hit the target. Ultimately, I decided to <strong>remove the tail part entirely</strong>. It was disappointing, but sometimes these trade-offs are necessary for performance rank improvement.</p>
<p>:::ad</p>
<h2>Cases Where Mantis Alone Isn't Enough {#mantis-limitations}</h2>
<p>Mantis LOD Editor is an excellent tool, but during the optimization process, several situations arose where Mantis alone wasn't sufficient or efficient:</p>
<ul>
<li><strong>Reduction limits on combined parts:</strong> When a single object contains multiple elements (e.g., inner and outer clothing fused together), reducing with Mantis causes the lower-polygon section (e.g., the inner layer) to degrade first, preventing adequate reduction of the higher-polygon section (e.g., the outer coat). Ideally, you'd separate the parts and reduce each individually.</li>
<li><strong>Remaining hidden mesh:</strong> Body mesh hidden under clothing (e.g., the stomach area) still counts toward polygon totals even when hidden. You want to delete just those concealed areas.</li>
<li><strong>Removing specific decorations:</strong> You want to remove just certain clothing details (pockets, belts, frills, etc.) to save polygons, but Mantis only offers global reduction.</li>
</ul>
<p>This is where "<strong>MeshDeleterWithTexture beta</strong>" becomes invaluable.</p>
<p><img src="/5d6ae22b8211e9636242fe78b9e32320/20241216-11-MeshDeleter-%E4%BD%9C%E6%A5%AD%E5%89%8D.png" alt="Body mesh before optimization (showing areas hidden by clothing)"></p>
<p>(For example, we want to delete this stomach mesh that's hidden under clothing.)</p>
<p><img src="/f7d88f2640997ff1bc7f0e4655126501/20241216-12-MeshDeleter-%E8%85%B9%E9%83%A8%E5%89%8A%E9%99%A4%E5%89%8D.png" alt="Clothing hidden, showing remaining stomach mesh"></p>
<h2>Partial Deletion with MeshDeleter {#meshdeleter}</h2>
<p>"<strong>MeshDeleterWithTexture beta</strong>" is a revolutionary tool that <strong>deletes mesh regions corresponding to areas you paint on a texture image</strong> (technically, it generates a new mesh with those areas removed).</p>
<p>The workflow is highly intuitive:</p>
<ol>
<li>
<p>Download the unitypackage from Gatosyocora's <a href="https://booth.pm/ja/items/1501527">Booth page</a> and import it into your project.</p>
</li>
<li>
<p>A "GotoTools" item appears in Unity's menu bar. Select "MeshDeleter with Texture" to open the tool window.</p>
</li>
</ol>
<p><img src="/c91687d2699cbfe3039d8f9efd1788d2/20241216-13-MeshDeleter%E3%83%A1%E3%83%8B%E3%83%A5%E3%83%BC.png" alt="MeshDeleter with Texture menu"></p>
<ol start="3">
<li>
<p>Drag and drop the object whose mesh you want to delete (e.g., the body mesh object) from the Hierarchy window into the "Renderer" field at the top of the window.</p>
</li>
<li>
<p>The texture image assigned to the object appears in the window.</p>
</li>
<li>
<p>Select "PEN" or another draw type from the right side, then <strong>paint the areas you want to delete (e.g., the stomach area hidden by clothing) in black on the texture</strong>. The painted areas are reflected in real-time on the model in the Scene view, letting you preview what will be removed.</p>
</li>
</ol>
<p><img src="/ecac34eec7ca8be6d8cd7afea555889b/20241216-14-MeshDeleter.png" alt="Painting the stomach area black on the texture in MeshDeleter"></p>
<ol start="6">
<li>After confirming the deletion area, click the "<strong>DeleteMesh</strong>" button. A new mesh with the specified areas removed is generated and automatically applied to the object.</li>
</ol>
<p><img src="/f24a68730f3db3fe817fc751e8003ffc/20241216-15-MeshDeleter-%E5%89%8A%E9%99%A4%E5%AE%8C%E4%BA%86.png" alt="After MeshDeleter execution -- stomach mesh has been removed"></p>
<p><strong>[Note] MeshDeleter is also non-destructive!</strong></p>
<p>This tool is also non-destructive -- the original mesh data remains in your project. If you made a mistake or want to revert, simply change the Mesh reference in the object's Mesh Renderer (or Skinned Mesh Renderer) component back to the original file.</p>
<p><img src="/f32170709c0d0b07995f8f5f35754761/20241216-16-MeshDeleter%E6%96%B0%E8%A6%8F%E3%83%A1%E3%83%83%E3%82%B7%E3%83%A5%E9%81%A9%E7%94%A8.png" alt="Inspector showing the newly generated mesh applied"></p>
<p>Using this method to remove the stomach mesh hidden under clothing saved <strong>approximately 1,000 polygons</strong>. You can then apply Mantis LOD Editor to the remaining areas for even more efficient polygon reduction.</p>
<p>:::ad</p>
<h2>Advanced: Separating Clothing Parts {#advanced}</h2>
<p><img src="/8bc0b47e1d2bd8cae35cb665adfa2fb6/20241216-17-Techware.png" alt="The target techware outfit (inner and outer layers are a single mesh)"></p>
<p>MeshDeleter's "delete mesh by painting on a texture" functionality can also be used to <strong>separate fused clothing parts</strong>.</p>
<p>The techware outfit I was working with (~60,000 polygons) had the coat (outer layer) and upper-body inner layer built as a single object (mesh). With Mantis, trying to reduce this combined mesh would degrade the lower-polygon inner layer first, preventing sufficient reduction of the higher-polygon coat section.</p>
<p>So I used MeshDeleter to create separate "inner-only" and "outer-only" meshes.</p>
<p><strong>[Part Separation Steps]</strong></p>
<ol>
<li>
<p>Duplicate the original techware object in the Hierarchy (Ctrl+D or Cmd+D) and rename each copy descriptively (e.g., "Inner" and "Outer").</p>
</li>
<li>
<p>Select the "Inner" object and open the MeshDeleter window.</p>
</li>
<li>
<p>On the texture, <strong>paint all areas corresponding to the outer coat in black</strong> and execute "DeleteMesh." This produces a mesh containing only the inner layer.</p>
</li>
</ol>
<p><img src="/f420d73579e2136640b57f5731b8cc2a/20241216-19-%E3%82%A4%E3%83%B3%E3%83%8A%E3%83%BC%E3%82%AA%E3%83%96%E3%82%B8%E3%82%A7%E3%82%AF%E3%83%88.png" alt="Painting the coat areas on the Inner object"></p>
<p><img src="/088d05f551a8baac55d771021965dc0b/20241216-18-%E3%82%A4%E3%83%B3%E3%83%8A%E3%83%BC%E7%94%A8.png" alt="The completed inner-only mesh"></p>
<ol start="4">
<li>Similarly, select the "Outer" object, and in the MeshDeleter window <strong>paint all areas corresponding to the inner layer in black</strong>, then execute "DeleteMesh." This produces a mesh containing only the outer coat.</li>
</ol>
<p><img src="/343b351f01ea87174a939a722592b768/20241216-20-%E3%82%A2%E3%82%A6%E3%82%BF%E3%83%BC%E3%82%AA%E3%83%96%E3%82%B8%E3%82%A7%E3%82%AF%E3%83%88.png" alt="Painting the inner areas on the Outer object"></p>
<p>This separates what was originally a single piece of clothing into two distinct meshes: "Inner" and "Outer." The fact that you can <strong>separate parts entirely within Unity</strong> without needing external modeling software like Blender is incredibly convenient.</p>
<p>After separation, you can apply NDMF Mantis LOD Editor individually to each part (inner, outer) for much more effective polygon reduction.</p>
<h2>Optimization Results and Key Takeaways {#summary}</h2>
<p>Using the methods described above, the avatar that started at 180,000 polygons with a VeryPoor rank was successfully reduced to 69,995 polygons with a Poor rank.</p>
<p><img src="/d6454183b26357a114e6c94e21e45caa/20241216-21-%E6%9C%80%E7%B5%82%E7%B5%90%E6%9E%9C.png" alt="Final polygon count (69,995) showing Poor rank"></p>
<p>To reach the target of under 70,000 polygons, I also made design adjustments like removing clothing decorations and reducing coat polygons. The process of reducing polygon count while preserving visual quality requires trial and error, but it was an excellent learning experience.</p>
<p><img src="/30d8a5fff84b009bb73259b9e48801a6/20241216-22-%E6%9C%80%E7%B5%82%E7%B5%90%E6%9E%9C.png" alt="Avatar appearance after optimization (visual degradation is minimal)"></p>
<p>Both "Mantis LOD Editor (NDMF version)" and "MeshDeleterWithTexture beta" are non-destructive tools, making them relatively safe for beginners to try. The reassurance that "you can always revert" is a significant psychological benefit when undertaking optimization work.</p>
<p>By combining Mantis's powerful polygon reduction capabilities with MeshDeleter's flexible partial editing and creative applications, even VeryPoor-ranked avatars have a strong chance of being effectively optimized.</p>
<p>If you're struggling with polygon counts on your VRChat avatar, I encourage you to try the tools and methods introduced in this article. (Note that performance rank is also affected by factors like material count, so consider those as well if aiming for comprehensive optimization.)</p>
<h2>References {#references}</h2>
<p>The following articles and videos were invaluable references during this avatar optimization (polygon reduction) work. Thank you for the excellent information!</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=VIDEO_ID">Episode 5: lil NDMF Mesh Simplifier VS Mantis LOD Editor - Polygon Reduction Comparison &#x26; Guide (VeryPoor to Medium) - INST Channel YouTube</a> (<em>The original link pointed to a YouTube search -- this is a placeholder. Please search for the actual video.</em>)</li>
<li><a href="https://metacul-frontier.com/?p=9367">Non-Destructive Polygon Reduction! Introducing the NDMF Integration Tool for Mantis LOD Editor | Metacul Frontier</a></li>
<li><a href="https://metacul-frontier.com/?p=7292#toc_id9">November 2024 Update: Must-Have VCC Tools for Avatar Modification | Metacul Frontier</a></li>
<li><a href="https://note.com/suhamahzk/n/nc8ede0789a23">Getting Your Avatar Under 20K Polygons with Mantis LOD Editor (VRChat) | Suzuha</a></li>
<li><a href="https://note.com/kohadachan/n/n68e1f8f15606">How to Escape VeryPoor with Just Unity | Kohada</a></li>
</ul>
<p>:::ad</p>
<hr>
<h2>Addendum: Considerations for High-End Models {#appendix}</h2>
<p>While this article covers improving a Very Poor avatar to Poor rank, it's worth noting that the same approach isn't always the best option for every case. Optimizing so-called "high-end models" with very high initial polygon counts requires particular caution.</p>
<p>These models are appealing precisely because of their intricate decorations, complex outfits, and numerous gimmicks -- but that appeal comes with enormous polygon counts, often far exceeding 100,000. Applying polygon reduction tools like Mantis LOD Editor to these models can strip away visual detail even at modest reduction levels, significantly diminishing the model's original appeal.</p>
<p>For avatars that are only slightly above the Poor rank threshold (e.g., 70K-100K polygons) or models with inherently simpler designs, the methods in this article work well. However, for high-end models that exceed twice the threshold -- 150K, 200K+ polygons -- trying to forcibly reduce them to Poor rank while maintaining visual quality is often impractical and not recommended.</p>
<p>So what should you do instead? Rather than extreme optimization of a single model, consider a usage-based approach:</p>
<ul>
<li><strong>Regular use:</strong> Keep your main avatar at its original quality without optimization</li>
<li><strong>Performance-critical situations:</strong> For large gatherings or worlds where performance matters, prepare a separate lightweight version (Quest-compatible versions may already be available) or use an entirely different lightweight avatar</li>
</ul>
<h3>Personal Experience and Reflection</h3>
<p>To be honest, after using the Manuka modification I optimized to Poor rank (69,995 polygons) for a while, my perspective changed somewhat. While the polygon reduction was technically successful, I found myself thinking "Was it really worth removing those clothing details for this?" about some of the design compromises I'd made.</p>
<p>Also, given that my VRChat play style doesn't frequently involve performance-critical worlds, the honest truth is that "keeping this particular model at Very Poor probably wouldn't have caused any real problems."</p>
<p>Avatar optimization is an important aspect of enjoying VRChat comfortably, but it's not always a necessity. I believe it's important to carefully consider your play style, the communities you participate in, and how much visual change you're willing to accept before deciding whether and how far to optimize.</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/07e8f6201d7b8c8e4a6258f032d8b995/20241216-VRChat-MantisLOD_thumb.jpg" medium="image"/></item><item><title><![CDATA[How to Add a 'Popular Posts' Section to Your Static Blog with Google Analytics Data API (Next.js / Gatsby.js)]]></title><description><![CDATA[By fetching popular post data as JSON via the Google Analytics Data API, you can implement a popular posts section on static blog sites built with Next.js or Gatsby.js.]]></description><link>https://uhiyama-lab.com/en/blog/webdev/static-popular-posts/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/webdev/static-popular-posts/</guid><category><![CDATA[gatsby]]></category><category><![CDATA[nextjs]]></category><pubDate>Sun, 15 Dec 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>The <strong>JAMstack</strong> architecture (e.g., <strong>Next.js</strong> or <strong>Gatsby.js</strong> + a headless CMS) decouples the frontend from the backend, offering great flexibility in technology choices along with significant benefits in performance and security. It's a modern approach to building websites that continues to grow in popularity.</p>
<p>However, features that were trivially implemented with plugins on traditional WordPress sites -- such as a "<strong>Popular Posts</strong>" ranking -- require a bit more creativity in this architecture.</p>
<p>WordPress makes popular posts easy because article data and page view counts are stored in the same database, allowing dynamic data retrieval and aggregation for ranking display. Most JAMstack setups, on the other hand, generate static HTML files at build time, making it difficult to dynamically display rankings based on real-time view counts. Additionally, with the frontend and backend separated, there's no built-in mechanism for recording and referencing access counts.</p>
<p>This is where <strong>Google Analytics</strong> comes in -- the widely adopted analytics tool already installed on most websites. By fetching per-page view count data through its API (<strong>Google Analytics Data API / GA4 Data API</strong>) and integrating it into the build process, you can display a "Popular Posts" section even on a fully static site.</p>
<p>This article walks you through the specific steps and concepts for using the <strong>Google Analytics Data API (GA4)</strong> to fetch popular post data and implement a "<strong>Popular Posts</strong>" section in a static site generator environment like <strong>Next.js</strong> or <strong>Gatsby.js</strong>.</p>
<p>:::ad</p>
<hr>
<p><strong>In this article</strong></p>
<ol>
<li>The challenge of implementing popular posts on static sites (headless CMS)</li>
<li>Enabling and setting up the Google Analytics Data API (GA4)</li>
<li>Using service accounts and security considerations</li>
<li>Implementation example: Node.js script for fetching popular post data</li>
<li>Integrating JSON data into your static site at build time</li>
<li>Summary: API integration makes popular posts possible on static sites!</li>
</ol>
<hr>
<h2>The Challenge of Implementing Popular Posts on Static Sites (Headless CMS)</h2>
<p><img src="/2e8bca8b3ad8539c7f515b77622be091/001-JAMstack-GoogleAnalytics.png" alt="Conceptual diagram of JAMstack architecture with Google Analytics integration"></p>
<p>As mentioned, traditional dynamic CMS platforms (like WordPress) typically aggregate page views in real-time through database access to display popular post rankings.</p>
<p>However, in a <strong>JAMstack</strong> <strong>static site</strong> setup, content is generated at build time, and there's fundamentally no dynamic server-side data processing. This makes it impractical to aggregate view counts in real-time and update rankings on every page load.</p>
<p>This is where leveraging <strong>Google Analytics</strong> as an external data source becomes effective. Here's the basic flow:</p>
<ol>
<li><strong>Fetch data before building:</strong> Run a script that retrieves per-page view counts for a specified period through the <strong>Google Analytics Data API</strong>. (e.g., top 10 posts by page views over the last 30 days)</li>
<li><strong>Format and save the data:</strong> Transform the retrieved ranking data into a convenient format (e.g., a <strong>JSON file</strong>) and save it within the project.</li>
<li><strong>Reference the data at build time:</strong> When the static site generator (Next.js or Gatsby.js) builds the site, it reads the saved JSON file.</li>
<li><strong>Embed in static pages:</strong> Use the ranking data (article titles, URLs, page views, etc.) to generate a "Popular Posts" component or list and embed it in the static HTML pages.</li>
</ol>
<p>With this approach, <strong>the latest ranking information is reflected every time the site is deployed (built)</strong>, while the site itself is served as fast static files -- no performance compromise.</p>
<hr>
<p>:::ad</p>
<h2>Enabling and Setting Up the Google Analytics Data API (GA4)</h2>
<p>First, let's enable the API and prepare the necessary credentials to programmatically access Google Analytics data.</p>
<p><img src="/9f99caf966076427147763643005c4a5/002-GoogleAnalyticsDataApi.png" alt="Enabling the Google Analytics Data API on Google Cloud Platform"></p>
<ol>
<li>
<p><strong>Enable the API on Google Cloud Platform (GCP):</strong></p>
<ul>
<li>Go to the Google Cloud Console (<a href="https://console.cloud.google.com/">https://console.cloud.google.com/</a>) and select or create a project.</li>
<li>Navigate to "APIs &#x26; Services" > "Library," search for "<strong>Google Analytics Data API</strong>," and enable it.</li>
</ul>
</li>
<li>
<p><strong>Create a service account and download the key:</strong></p>
<ul>
<li>In GCP, go to "APIs &#x26; Services" > "Credentials," select "Create Credentials" > "Service Account," and create a new service account. (The name can be anything; a role is often not required.)</li>
<li>Select the created service account, go to "Keys" tab > "Add Key" > "Create New Key," and create a <strong>JSON</strong> format key. Download it. <strong>This JSON file will be used by your script. Keep it safe.</strong></li>
</ul>
</li>
<li>
<p><strong>Grant the service account access to your GA4 property:</strong></p>
<ul>
<li>Go to Google Analytics (<a href="https://analytics.google.com/">https://analytics.google.com/</a>) and open "Admin" (gear icon) for your target GA4 property.</li>
<li>Navigate to "Property Settings" > "Property Access Management," click the "+" button, and select "Add Users."</li>
<li>Enter the service account's email address (visible on the GCP credentials page) and grant at least "<strong>Viewer</strong>" permissions.</li>
</ul>
</li>
<li>
<p><strong>Note your GA4 Property ID:</strong></p>
<ul>
<li>
<p>In Google Analytics under "Admin" > "Property Settings," find the "<strong>Property ID</strong>" (a numeric-only ID) and save it. This will also be used in your script.</p>
<p><img src="/4e1a18b07e990b5e6cdca55e2b6c61cb/004-AnalyticsPropertyID.png" alt="GA4 Property ID confirmation screen"></p>
</li>
</ul>
</li>
</ol>
<p>For detailed guidance on these steps (especially GCP and GA4 operations), refer to Google's official documentation and other tutorial resources as appropriate for your setup.</p>
<hr>
<p>:::ad</p>
<h2>Using Service Accounts and Security Considerations</h2>
<p><img src="/323d845aa6d5c9756b1aaa3e6277c5b0/003-GoogleServiceAccount.png" alt="Service account key management on Google Cloud Platform"></p>
<p>When programmatically accessing Google services like the Google Analytics Data API, you typically use a "<strong>service account</strong>" -- a special account type. Unlike a personal Google account, it's designed for applications and scripts to authenticate themselves.</p>
<p>The <strong>service account key (JSON file)</strong> you created and downloaded earlier contains secret credentials for API access as that service account. Therefore, <strong>handle this key file with extreme care</strong>.</p>
<ul>
<li><strong>Never expose it publicly:</strong> <strong>Never commit the JSON key file to a Git repository (especially public repositories).</strong> Accidental exposure creates a risk of unauthorized use. Add the key filename to your <code>.gitignore</code> file to exclude it from Git tracking.</li>
<li><strong>Store it securely:</strong> During local development, you might keep it in the project root, but for production environments (deployment targets), the standard practice is to pass credentials securely via environment variables or secret management features. (See the script example below.)</li>
<li><strong>Minimize permissions:</strong> Grant the service account only the minimum required "Viewer" permission on the GA4 property for data retrieval.</li>
</ul>
<hr>
<p>:::ad</p>
<h2>Implementation Example: Node.js Script for Fetching Popular Post Data</h2>
<p>Here's a Node.js script example that uses the prepared service account key and GA4 Property ID to fetch popular post data and save it as a JSON file.</p>
<p><strong>[Setup]</strong></p>
<ol>
<li>Create a <code>scripts</code> folder in your project root directory and save the following script as <code>fetch-popular-posts.js</code> (or any name you prefer).</li>
<li>Place the GCP service account key JSON file in your project root directory as <code>service-account.json</code> (or any name).</li>
<li>Create a <code>.env</code> file in your project root directory with your GA4 Property ID in the following format (replace <code>?????????</code> with your actual ID):</li>
</ol>
<pre><code>GA4_PROPERTY_ID=?????????
</code></pre>
<ol start="4">
<li><strong>[Important]</strong> Since <code>service-account.json</code> and <code>.env</code> contain sensitive information, add them to your <code>.gitignore</code> to prevent them from being included in the Git repository.</li>
</ol>
<pre><code># .gitignore example
service-account.json
.env
</code></pre>
<ol start="5">
<li>Install the required npm packages by running the following command in your terminal:</li>
</ol>
<pre><code class="language-bash">npm install @google-analytics/data dotenv
</code></pre>
<p>(Or <code>yarn add @google-analytics/data dotenv</code>.)</p>
<p><strong>[Script Example: <code>scripts/fetch-popular-posts.js</code>]</strong></p>
<pre><code class="language-javascript">// Load environment variables from .env file
require("dotenv").config();
// Google Analytics Data API client library
const { BetaAnalyticsDataClient } = require("@google-analytics/data");
// Node.js file system and path modules
const fs = require("fs");
const path = require("path");

// Define the popular posts fetching logic as an async function
async function fetchPopularPosts() {
  let credentials;

  // --- Credential Setup ---
  // Prefer GA_CREDENTIALS_JSON env variable for production (recommended)
  if (process.env.GA_CREDENTIALS_JSON) {
    try {
      credentials = JSON.parse(process.env.GA_CREDENTIALS_JSON);
    } catch (e) {
      console.error("Failed to parse GA_CREDENTIALS_JSON environment variable.", e);
      process.exit(1);
    }
  }
  // Fall back to local service-account.json file
  else {
    const keyFilePath = path.resolve(__dirname, "../service-account.json");
    if (fs.existsSync(keyFilePath)) {
      credentials = JSON.parse(fs.readFileSync(keyFilePath, "utf8"));
    } else {
      console.error(`Service account key file not found at ${keyFilePath}. Or set GA_CREDENTIALS_JSON env var.`);
      process.exit(1);
    }
  }

  // --- Initialize GA4 Data API Client ---
  const analyticsDataClient = new BetaAnalyticsDataClient({
    credentials: {
      client_email: credentials.client_email,
      private_key: credentials.private_key,
    },
  });

  // --- Get GA4 Property ID ---
  const propertyId = process.env.GA4_PROPERTY_ID;
  if (!propertyId) {
    throw new Error("GA4_PROPERTY_ID is not set in environment variables.");
  }

  // --- Execute API Request ---
  try {
    const [response] = await analyticsDataClient.runReport({
      // Specify the property ID
      property: `properties/${propertyId}`,
      // Data range (last 30 days)
      dateRanges: [{ startDate: "30daysAgo", endDate: "today" }],
      // Dimensions to retrieve (page path, page title)
      dimensions: [{ name: "pagePath" }, { name: "pageTitle" }],
      // Metrics to retrieve (page views)
      metrics: [{ name: "screenPageViews" }], // GA4 uses "screenPageViews" instead of "ga:pageviews"
      // Sort order (descending by page views = highest first)
      orderBys: [{ metric: { metricName: "screenPageViews" }, desc: true }],
      // Number of results (top 10)
      limit: 10,
      // Dimension filter (only pages starting with '/blog/')
      dimensionFilter: {
        filter: {
          fieldName: "pagePath",
          stringFilter: {
            matchType: "PARTIAL_REGEXP", // Partial regex match
            value: "^/blog/", // Paths starting with /blog/
          },
        },
      },
    });

    // --- Format Results ---
    // Guard against an empty result set (response.rows is undefined when no rows match)
    const popularPosts = (response.rows || []).map((row) => ({
      pagePath: row.dimensionValues[0].value,
      pageTitle: row.dimensionValues[1].value,
      pageViews: parseInt(row.metricValues[0].value, 10), // Convert view count to integer
    }));

    // --- Write to JSON File ---
    // Create data directory in project root if it doesn't exist
    const dataDir = path.resolve(__dirname, "../data");
    if (!fs.existsSync(dataDir)) {
      fs.mkdirSync(dataDir, { recursive: true });
    }
    // Save as popular-posts.json in the data directory
    fs.writeFileSync(
      path.join(dataDir, "popular-posts.json"),
      JSON.stringify(popularPosts, null, 2) // Pretty-print for readability
    );
    console.log("Successfully fetched popular posts and saved to data/popular-posts.json");

  } catch (error) {
      console.error("Error fetching Google Analytics data:", error);
      process.exit(1); // Exit on error
  }
}

// Execute the function
fetchPopularPosts();
</code></pre>
<p><strong>Key points about the script:</strong></p>
<ul>
<li><strong>Authentication:</strong> The script prioritizes the <code>GA_CREDENTIALS_JSON</code> environment variable, falling back to the local <code>service-account.json</code> file. This allows secure credential handling across both local development and production environments (like Cloudflare Pages). In production, it's common practice to set the entire JSON key content as an environment variable.</li>
<li><strong>API Request:</strong> The <code>runReport</code> method sends a request to GA4.
<ul>
<li><code>dateRanges</code>: Specifies the data retrieval period (e.g., "30daysAgo" to "today").</li>
<li><code>dimensions</code>: Specifies the types of information to retrieve (page path, title, etc.).</li>
<li><code>metrics</code>: Specifies the metrics to aggregate (page views via <code>screenPageViews</code>, etc.).</li>
<li><code>orderBys</code>: Specifies the sort order (e.g., descending by page views).</li>
<li><code>limit</code>: Specifies the maximum number of results to retrieve.</li>
<li><code>dimensionFilter</code>: <strong>[Important]</strong> Filters which pages to include. In this example, only pages whose path (<code>pagePath</code>) matches the regex <code>^/blog/</code> are retrieved. This prevents non-blog pages (like the homepage <code>/</code>) from appearing in the ranking. <strong>Make sure to adjust this <code>value</code> to match your blog's URL structure.</strong></li>
</ul>
</li>
<li><strong>Formatting and saving:</strong> The API response is converted to a convenient JSON format (an array of objects with path, title, and view count) and written to <code>data/popular-posts.json</code>.</li>
</ul>
<p>Running this script in your terminal with <code>node scripts/fetch-popular-posts.js</code> will save the popular post data as a JSON file in the <code>data</code> folder (created if it doesn't exist).</p>
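<p>For reference, the generated <code>data/popular-posts.json</code> follows the shape defined in the formatting step above -- an array of objects with <code>pagePath</code>, <code>pageTitle</code>, and <code>pageViews</code>. (The paths, titles, and counts below are purely illustrative.)</p>
<pre><code class="language-json">[
  {
    "pagePath": "/blog/example-post/",
    "pageTitle": "Example Post Title | Site Name",
    "pageViews": 1234
  },
  {
    "pagePath": "/blog/another-post/",
    "pageTitle": "Another Post Title | Site Name",
    "pageViews": 987
  }
]
</code></pre>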
<hr>
<p>:::ad</p>
<h2>Integrating JSON Data into Your Static Site at Build Time</h2>
<p>Once the JSON file with popular post data is ready, all that's left is to have your static site generator (Next.js or Gatsby.js) read this JSON file during the build process and pass the data to pages or components for display.</p>
<p>The critical requirement here is ensuring that <strong>the JSON file is updated with the latest data every time the site is built (deployed)</strong>. Otherwise, the ranking will become stale.</p>
<p>To achieve this, you typically edit the <code>scripts</code> section of your project's <code>package.json</code> file to run the data-fetching script before the build command executes.</p>
<p><strong>[<code>package.json</code> configuration example (for Gatsby.js)]</strong></p>
<pre><code class="language-json">{
  "scripts": {
    // Define the command that runs the data-fetching script
    "fetch-data": "node scripts/fetch-popular-posts.js",
    // Optionally fetch data when starting the dev server
    "develop": "npm run fetch-data &#x26;&#x26; gatsby develop",
    // Ensure data is fetched before production builds
    "build": "npm run fetch-data &#x26;&#x26; gatsby build",
    // Start command (for dev server startup, etc.)
    "start": "npm run develop"
    // Plus test, serve, etc.
  }
}
</code></pre>
<p>In this example:</p>
<ol>
<li>A command called <code>fetch-data</code> is defined to run the data-fetching script.</li>
<li>The <code>build</code> command (for production builds) first runs <code>npm run fetch-data</code>, then executes <code>gatsby build</code>. (<code>&#x26;&#x26;</code> chains commands sequentially.)</li>
<li>Similarly, the development server startup commands (<code>develop</code>, <code>start</code>) also fetch data (useful for working with current data during development). For Next.js, add <code>npm run fetch-data &#x26;&#x26;</code> before <code>next dev</code> or <code>next build</code>.</li>
</ol>
<p>With this configuration, when you run <code>npm run build</code> (or when automatic builds are triggered on hosting services like Vercel or Cloudflare Pages):</p>
<ol>
<li><code>fetch-popular-posts.js</code> executes first, saving the latest popular post data to <code>data/popular-posts.json</code>.</li>
<li>Then the Gatsby (or Next.js) build process starts, reading <code>data/popular-posts.json</code> and generating static HTML pages that include the popular posts section.</li>
</ol>
<p>This <strong>build-time data fetching</strong> approach ensures that site visitors always see the most up-to-date popular posts ranking (as of build time) served as fast static pages, without concerns about server load or runtime API calls. This is a major advantage of the JAMstack architecture from both performance and security perspectives.</p>
<p>From here, follow your framework's conventions to read the JSON data at build time and pass it to React components for display. (For example, in Gatsby you might read the JSON in <code>gatsby-node.js</code> and add it to the GraphQL data layer; in Next.js you could use <code>fs.readFile</code> within <code>getStaticProps</code>.)</p>
<hr>
<p>:::ad</p>
<h2>Summary: API Integration Makes Popular Posts Possible on Static Sites!</h2>
<p>In <strong>static site</strong> environments built with <strong>JAMstack</strong> or <strong>headless CMS</strong> architectures, it's not straightforward to display a "Popular Posts" section in real-time like you would with a traditional dynamic CMS such as WordPress.</p>
<p>However, as this article has demonstrated, by leveraging the <strong>Google Analytics Data API (GA4)</strong> to fetch and format access data <strong>before the site build process</strong>, saving it as a JSON file, and embedding it into static pages at build time, you can maintain all the advantages of a static site (fast loading, high security, scalability) while still <strong>displaying an up-to-date popular posts ranking</strong>.</p>
<p>This approach isn't tightly coupled to any specific CMS or frontend framework, making it applicable to <strong>Next.js</strong>, <strong>Gatsby.js</strong>, and various other JAMstack configurations. While there are some considerations -- such as secure management of service account keys and proper API request filter configuration -- once the system is set up, it automatically updates your ranking on every build.</p>
<p>I encourage you to adapt this approach to your own static blog or website architecture and implement a "Popular Posts" section of your own.</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/6ee80f0db975ccec9e791d933a7d498a/20241215-Google-Analytics-Data-API%E3%81%A7%E9%9D%99%E7%9A%84%E3%83%96%E3%83%AD%E3%82%B0%E3%82%B5%E3%82%A4%E3%83%88%E3%81%AB%E3%80%8E%E4%BA%BA%E6%B0%97%E8%A8%98%E4%BA%8B%E4%B8%80%E8%A6%A7%E3%80%8F%E3%82%92%E5%AE%9F%E8%A3%85%E3%81%99%E3%82%8B.jpg" medium="image"/></item><item><title><![CDATA[Fix See-Through Clothing in VRChat: How to Use MeshFlipper]]></title><description><![CDATA[How to fix see-through coat and cape linings in VRChat using MeshFlipper]]></description><link>https://uhiyama-lab.com/en/blog/dialy/vrchat-meshflipper/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/dialy/vrchat-meshflipper/</guid><category><![CDATA[vrchat]]></category><pubDate>Sat, 14 Dec 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Have you ever been told by a friend in VRChat that "the lining of your coat is transparent and you can see through it" when using an avatar with a long coat, cape, or similar garment?</p>
<p>From your own perspective everything looks fine, but from other people's viewpoint, the back side of the coat isn't rendered and the body shows through... This is a relatively common issue encountered when using avatars in VRChat.</p>
<p>This "see-through clothing lining" problem can actually be fixed fairly easily with a simple Unity tool. In this article, I'll explain the cause and walk you through the solution using a tool called "MeshFlipper."</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#cause">Why Clothing Linings Appear See-Through</a></li>
<li><a href="#meshflipper-usage">Installing and Using MeshFlipper</a></li>
<li><a href="#technical-details">How It Works Technically</a></li>
<li><a href="#summary">Summary</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Why Clothing Linings Appear See-Through {#cause}</h2>
<p><img src="/ba051a855353fe211521d4a44ecb2269/02_20241214_VRC-%E6%9C%8D%E3%81%AE%E8%A3%8F%E5%9C%B0.jpg" alt="Example of a coat lining appearing see-through in VRChat"></p>
<p>The main cause of this issue is <strong>single-sided rendering</strong> (also called backface culling), a technique used in most 3D models.</p>
<p>In simple terms, the polygons that make up a 3D model only render their <strong>front face</strong> by default -- the <strong>back face</strong> is not drawn. This saves computational resources. For tight-fitting clothing, this works perfectly fine since only the exterior (front face) is ever visible.</p>
<p>However, for garments like coats, capes, and skirts that flow freely or have long hems, the <strong>back face</strong> can become visible when the character moves or is viewed from certain angles.</p>
<p>With single-sided rendering, this back face is not a render target, so nothing is drawn -- the background or body shows through instead. This is why it looks like "the lining is transparent" or "there's a rendering bug" when viewed from a third-person perspective or through someone else's camera.</p>
<p>To fix this problem, the clothing simply needs to <strong>render its back face as well</strong>. The tool that makes this easy is "MeshFlipper," introduced next.</p>
<p>:::ad</p>
<h2>Installing and Using MeshFlipper {#meshflipper-usage}</h2>
<p>While searching for a solution to the see-through lining problem, I came across "MeshFlipper" -- a Unity editor extension tool -- through the following article. (Thank you for the information!)</p>
<p>:::post-link{url="<a href="https://note.com/moegitsubasa/n/nd73471257f31#fcc0f527-bb59-4f40-9edf-f6ba3ca05c92">https://note.com/moegitsubasa/n/nd73471257f31#fcc0f527-bb59-4f40-9edf-f6ba3ca05c92</a>" text="Android(Quest) Support Manual Advanced!! - Moegitsubasa"}</p>
<p>MeshFlipper is an incredibly convenient tool that processes 3D model mesh data in Unity to make it double-sided, so the back face is also rendered. It's distributed for free on Booth by the developer fum1.</p>
<p>:::post-link{url="<a href="https://booth.pm/ja/items/5645609">https://booth.pm/ja/items/5645609</a>" text="[Free] Mesh Back Face Generator / Mesh Flipper (BOOTH)"}</p>
<p>Installation and basic usage are very straightforward:</p>
<p><img src="/30a151be27cea699863e52e5dca7a102/03_20241214_VRC-%E6%9C%8D%E3%81%AE%E8%A3%8F%E5%9C%B0.jpg" alt="MeshFlipper window and settings options"></p>
<ol>
<li>
<p><strong>Import MeshFlipper into your Unity project</strong>
Download MeshFlipper from the Booth link above. Drag and drop the <code>MeshFlipper.cs</code> script file from the download into your Unity project's <code>Assets</code> folder (for example, into an <code>Editor</code> folder).</p>
</li>
<li>
<p><strong>Select the mesh with the see-through lining</strong>
In Unity's Hierarchy window, select the avatar you want to fix and locate the clothing part (object) with the see-through lining. Find the <code>Skinned Mesh Renderer</code> or <code>Mesh Renderer</code> component attached to that object.</p>
</li>
<li>
<p><strong>Open the MeshFlipper window</strong>
From Unity's menu bar, select <code>fum1</code> > <code>Mesh Flipper</code> to open the dedicated MeshFlipper window.</p>
</li>
<li>
<p><strong>Configure the options (Important: select TwoSides)</strong>
With the target clothing part (the object with the Skinned Mesh Renderer, etc.) selected, configure the following options in the MeshFlipper window:</p>
<ul>
<li><code>TwoSides</code>: <strong>Check this option</strong> -- this is the key feature. It makes the mesh double-sided so the back face is also rendered.</li>
<li><code>Flip</code>: This reverses the direction of polygon normals. For fixing see-through linings, <code>TwoSides</code> alone is usually sufficient.</li>
</ul>
<p>In most cases, simply checking <code>TwoSides</code> is enough to solve the see-through lining problem.</p>
</li>
<li>
<p><strong>Click "Create Mesh" to apply</strong>
After configuring the options, click the "Create Mesh" button. MeshFlipper will process the selected mesh to make it double-sided, generate new mesh data, and automatically replace the original mesh. (The original mesh data may be kept as a backup.)</p>
</li>
</ol>
<p>After processing is complete, try viewing the back side of the garment in Unity's Scene view, or upload to VRChat and have a friend check. If the lining renders properly and is no longer transparent, you're all set!</p>
<h2>How It Works Technically {#technical-details}</h2>
<p>On a slightly more technical level, here's what MeshFlipper does internally:</p>
<ul>
<li><strong>Vertex duplication and normal inversion:</strong> It copies the original mesh's vertex data and inverts the polygon normals (facing direction) on the copy.</li>
<li><strong>Mesh merging:</strong> It combines the original mesh (front face) with the normal-inverted copy (for the back face) into a single mesh.</li>
</ul>
<p>This effectively creates mesh data that has <strong>polygons for both the front face and the back face</strong>. As a result, the surface is rendered regardless of which direction it's viewed from, solving the see-through lining problem.</p>
<p>In essence, MeshFlipper creates a "virtual back side" for clothing that originally didn't have one (wasn't rendered).</p>
<p>Since it handles this complex processing with a single button click, it's extremely handy when working with coats, skirts, and other garments where the back side is likely to be visible.</p>
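<p>The two steps can be sketched on a minimal indexed triangle mesh. Note this is an illustrative reimplementation of the general technique, not MeshFlipper's actual source:</p>

```javascript
// Illustrative sketch of "double-siding" an indexed triangle mesh
// (not MeshFlipper's actual code). A mesh here is flat arrays of
// vertex positions, per-vertex normals, and triangle indices.
function makeDoubleSided(mesh) {
  const vertexCount = mesh.positions.length / 3;
  // 1) Duplicate the vertices and invert the normals on the copies.
  const positions = mesh.positions.concat(mesh.positions);
  const normals = mesh.normals.concat(mesh.normals.map(c => -c));
  // 2) Copy each triangle with reversed winding order (swap two indices)
  //    so it faces the other way, pointing at the duplicated vertices,
  //    then merge front and back faces into one index list.
  const backFaces = [];
  for (let i = 0; i < mesh.indices.length; i += 3) {
    const [a, b, c] = mesh.indices.slice(i, i + 3);
    backFaces.push(a + vertexCount, c + vertexCount, b + vertexCount);
  }
  return { positions, normals, indices: mesh.indices.concat(backFaces) };
}
```

<p>The vertex and triangle counts double, so there is a (usually small) rendering cost -- which is exactly why engines cull back faces by default.</p>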
<p>:::ad</p>
<h2>Summary {#summary}</h2>
<p>If you encounter the issue where your VRChat avatar's clothing (especially coats, capes, skirts, etc.) appears see-through or transparent from the back, consider using the Unity editor extension tool "MeshFlipper."</p>
<p>With a simple process, it makes meshes double-sided, and in most cases, this alone is enough to make the lining render properly.</p>
<p>MeshFlipper is distributed for free on Booth by the developer fum1. If you're a VRChat user or avatar customizer dealing with the same issue, I highly recommend giving it a try!</p>
<p>:::post-link{url="<a href="https://booth.pm/ja/items/5645609">https://booth.pm/ja/items/5645609</a>" text="[Free] Mesh Back Face Generator / Mesh Flipper (BOOTH)"}</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/d3ad67638072a83f635c60294eedd529/Thumb_20241214_VRC-%E6%9C%8D%E3%81%AE%E8%A3%8F%E5%9C%B0.jpg" medium="image"/></item><item><title><![CDATA[Cut Noto Sans JP Font Size by 50% with FontTools Subsetting (Preserving OTF Data)]]></title><description><![CDATA[How to subset Noto Sans JP with FontTools to reduce file size by 50% while preserving OTF layout data, improving page load speed]]></description><link>https://uhiyama-lab.com/en/blog/webdev/optimize-subset-fonttools/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/webdev/optimize-subset-fonttools/</guid><category><![CDATA[font]]></category><pubDate>Thu, 05 Dec 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Is your website loading slowly? One common culprit is <strong>large web font files</strong>. Japanese fonts in particular contain an enormous number of characters, leading to massive file sizes. In this article, I'll walk you through <strong>subsetting</strong> the popular Japanese font "<strong>Noto Sans JP</strong>" -- extracting only the characters you actually need to dramatically reduce file size and <strong>improve page load speed</strong>.</p>
<p>While there are many tools available for this task, I initially tried "Subset Font Maker" but encountered rendering issues in my environment. Ultimately, I used the Python library "<strong>FontTools</strong>" to subset Noto Sans JP into <strong>WOFF2 format</strong> while <strong>preserving all OTF layout data</strong> (kerning, ligatures, and other typographic features), successfully <strong>reducing the file size by over 50%</strong> and improving load times. Here's a detailed record of the steps and key considerations.</p>
<p>:::ad</p>
<p><strong>In this article</strong></p>
<ol>
<li>What is font subsetting, and why does it help page speed?</li>
<li>The pitfall: How Subset Font Maker dropped OTF data and broke the layout</li>
<li>The solution: Step-by-step subsetting with FontTools</li>
<li>Results: The impact of FontTools subsetting on page speed</li>
</ol>
<hr>
<h2>1. What Is Font Subsetting, and Why Does It Help Page Speed?</h2>
<p><strong>Font subsetting</strong> is the process of <strong>extracting only the characters your website actually uses from a font file, removing unnecessary character data to reduce file size</strong>. The reasoning is simple: "Why load every obscure kanji and special symbol when you only use a fraction of them? Keep just the characters you need, and your site will load much faster!"</p>
<p>"<strong>Noto Sans JP</strong>", a widely-used Japanese font also available on Google Fonts, is a high-quality typeface that covers a vast number of characters (hiragana, katakana, common kanji, name kanji, symbols, and more).</p>
<p><a href="https://fonts.google.com/noto/specimen/Noto+Sans+JP">Noto Sans Japanese (Google Fonts)</a></p>
<p>Because of this extensive coverage, when you download and use it as a web font (e.g., in WOFF format), even just the Regular and Bold weights can weigh in at <strong>roughly 4MB each</strong>. That means users have to download a huge amount of data every time they visit a page, creating a significant bottleneck for display speed.</p>
<p><img src="/43e98bd6513397d536aa6416e509273c/01_%E9%9D%9E%E3%82%B5%E3%83%96%E3%82%BB%E3%83%83%E3%83%88%E7%8A%B6%E6%85%8B%E3%81%AE%E3%83%95%E3%82%A9%E3%83%B3%E3%83%88%E3%82%B5%E3%82%A4%E3%82%BA.png" alt="Noto Sans JP font file sizes before subsetting (approximately 4MB each)"></p>
<p>(The font files before subsetting -- notice how large they are.)</p>
<p>This is exactly why subsetting these massive font files is so important.</p>
<hr>
<p>:::ad</p>
<h2>2. The Pitfall: How Subset Font Maker Dropped OTF Data and Broke the Layout</h2>
<p>When researching font subsetting, you'll frequently find recommendations for a GUI tool called "Subset Font Maker." Following several tutorial guides, I used this tool along with "WOFF Converter" to create a subset WOFF font from Noto Sans JP (TTF format).</p>
<p><img src="/3161406842fc07175719871426a3e6dc/02_%E3%82%B5%E3%83%96%E3%82%BB%E3%83%83%E3%83%88%E3%83%95%E3%82%A9%E3%83%B3%E3%83%88%E3%83%A1%E3%83%BC%E3%82%AB%E3%83%BC%E3%81%AB%E3%82%88%E3%82%8B%E5%A4%89%E6%8F%9B.jpg" alt="Subset Font Maker interface"></p>
<p>However, when I applied the generated font to my site, <strong>unexpected layout issues appeared</strong>.</p>
<p><img src="/680a82a264eda5c5afd6600b0cc79928/03_OTF%E6%83%85%E5%A0%B1%E6%AC%A0%E6%90%8D%E3%81%AB%E3%82%88%E3%82%8B%E8%A1%A8%E7%A4%BA%E3%82%BA%E3%83%AC.png" alt="Layout problems after conversion with Subset Font Maker (character spacing is too wide)"></p>
<p>(Left: original rendering. Right: after conversion with Subset Font Maker. Notice how the character spacing in the article title is unnaturally wide.)</p>
<p>After investigating the cause, it turned out that Subset Font Maker had stripped important <strong>OTF (OpenType Font) data</strong> during the conversion process -- specifically the information needed for character spacing adjustments (kerning, pair kerning) stored in tables like GPOS.</p>
<p>OTF data does more than define character shapes; it contains <strong>critical information for fine-tuning the visual appearance of text</strong>, such as automatically adjusting spacing between characters, or substituting specific character combinations with ligatures (GSUB table). Without this data, character spacing reverts to default values, causing the layout to break.</p>
<p>There's no point in reducing file size if the visual quality suffers. So I searched for a tool that could perform <strong>high-precision subsetting while preserving OTF data</strong>, and that's when I found the Python library "<strong>FontTools</strong>."</p>
<hr>
<p>:::ad</p>
<h2>3. The Solution: Step-by-Step Subsetting with FontTools</h2>
<p><strong>FontTools</strong> is a powerful suite of Python libraries for manipulating font files. It's open source and supports a wide range of advanced operations on TTF and OTF fonts -- subsetting, data editing, format conversion, and more. It's also conveniently usable from the command line.</p>
<p><a href="https://github.com/fonttools/fonttools">FontTools (GitHub Repository)</a></p>
<p>Here's how to create a Noto Sans JP subset using FontTools:</p>
<h3>3-1. Installing FontTools</h3>
<p>FontTools is a Python library, so first make sure Python is installed on your system. Once Python is available, run the following <code>pip</code> command in your terminal to install FontTools:</p>
<pre><code class="language-bash">pip install fonttools brotli zopfli
</code></pre>
<p><em>Installing <code>brotli</code> and <code>zopfli</code> alongside FontTools is recommended for WOFF2 support and higher compression ratios (via the <code>--with-zopfli</code> option).</em></p>
<h3>3-2. Running the Subsetting Command</h3>
<p>Next, use the <code>pyftsubset</code> command in your terminal to perform the subsetting. Navigate to the directory containing your original Noto Sans JP font file (e.g., <code>NotoSansJP-Regular.ttf</code>) and run the following command. (Adjust filenames and paths to match your environment.)</p>
<pre><code class="language-bash"># Example: Create a subset WOFF2 from NotoSansJP-Regular.ttf
pyftsubset NotoSansJP-Regular.ttf \
  --output-file=NotoSansJP-Regular.woff2 \
  --flavor=woff2 \
  --layout-features='*' \
  --unicodes='U+0020-007E,U+3000-30FF,U+4E00-9FAF,U+FF01-FF60,U+FF65-FF9F' \
  --with-zopfli \
  --verbose
</code></pre>
<p>For Bold or other weights, simply change the input and output filenames and run the command again.</p>
<p><strong>Command option breakdown:</strong></p>
<ul>
<li>
<p><strong><code>NotoSansJP-Regular.ttf</code></strong></p>
<p>The source font file (TTF or OTF) to subset.</p>
</li>
<li>
<p><strong><code>--output-file=NotoSansJP-Regular.woff2</code></strong></p>
<p>The output filename and path for the subset font.</p>
</li>
<li>
<p><strong><code>--flavor=woff2</code></strong></p>
<p>Specifies the output format as <strong>WOFF2</strong>. WOFF2 is optimized for web fonts and provides high compression ratios for smaller file sizes.</p>
</li>
<li>
<p><strong><code>--layout-features='*'</code></strong></p>
<p><strong>[KEY OPTION]</strong> This is what preserves OTF data. Setting <code>'*'</code> retains all OpenType layout feature information, including kerning and ligatures. This prevents the layout issues that occurred with Subset Font Maker.</p>
</li>
<li>
<p><strong><code>--unicodes='...'</code></strong></p>
<p><strong>[KEY OPTION]</strong> Specifies the Unicode ranges of characters to include in the subset. Characters outside these ranges will not be included and won't render properly (appearing as missing glyphs), so configure this carefully. The ranges in the example above cover:</p>
<ul>
<li><code>U+0020-007E</code>: Basic ASCII alphanumeric characters and symbols</li>
<li><code>U+3000-30FF</code>: Full-width space, punctuation, hiragana, katakana, etc.</li>
<li><code>U+4E00-9FAF</code>: CJK Unified Ideographs (covers most commonly used kanji)</li>
<li><code>U+FF01-FF60</code>: Some full-width alphanumeric characters and symbols</li>
<li><code>U+FF65-FF9F</code>: Half-width katakana</li>
</ul>
<p><em>Adjust these ranges based on the characters your site actually uses. For example, add corresponding Unicode ranges if you use specific symbols or less common kanji.</em></p>
</li>
<li>
<p><strong><code>--with-zopfli</code></strong></p>
<p>Uses Google's Zopfli compression algorithm for even higher WOFF2 compression. Effective for further file size reduction. (Requires the <code>zopfli</code> library.)</p>
</li>
<li>
<p><strong><code>--verbose</code></strong></p>
<p>Outputs detailed logs during the subsetting process. Useful for troubleshooting errors and verifying which data was included or excluded.</p>
</li>
</ul>
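<p>Before running <code>pyftsubset</code>, it can help to sanity-check what a given <code>--unicodes</code> value actually covers. The helper below is a hypothetical convenience function (not part of FontTools) that expands the same comma-separated <code>U+XXXX-YYYY</code> syntax into a set of codepoints:</p>

```javascript
// Hypothetical helper (not part of FontTools): expand a pyftsubset-style
// --unicodes value such as 'U+0020-007E,U+3000-30FF' into a Set of codepoints.
function expandUnicodes(spec) {
  const codepoints = new Set();
  for (const part of spec.split(',')) {
    const m = part.trim().match(/^U\+([0-9A-Fa-f]+)(?:-([0-9A-Fa-f]+))?$/);
    if (!m) throw new Error(`Unrecognized range: ${part}`);
    const start = parseInt(m[1], 16);
    const end = m[2] ? parseInt(m[2], 16) : start;
    for (let cp = start; cp <= end; cp++) codepoints.add(cp);
  }
  return codepoints;
}

// Check that characters you care about survive the subset:
const covered = expandUnicodes(
  'U+0020-007E,U+3000-30FF,U+4E00-9FAF,U+FF01-FF60,U+FF65-FF9F'
);
console.log(covered.has('あ'.codePointAt(0))); // hiragana: true
console.log(covered.has('😀'.codePointAt(0))); // emoji: false, would render as a missing glyph
```

<p>If a character you use comes back <code>false</code>, add its range to the <code>--unicodes</code> option before subsetting.</p>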
<h3>3-3. Loading the Generated WOFF2 File via CSS</h3>
<p>Once the command completes successfully, a lightweight WOFF2 subset font file will be generated at the specified location. Upload this file to your web server and update your CSS to load it using <code>@font-face</code> rules.</p>
<pre><code class="language-css">/* Regular weight */
@font-face {
  font-family: 'Noto Sans JP'; /* Specify the font name */
  /* Path to the generated WOFF2 file */
  src: url('/path/to/your/fonts/NotoSansJP-Regular.woff2') format('woff2');
  font-weight: 400; /* or normal */
  font-style: normal;
  /* font-display: swap; controls behavior during font loading. Recommended. */
  font-display: swap;
}

/* Bold weight */
@font-face {
  font-family: 'Noto Sans JP';
  /* Path to the bold WOFF2 file */
  src: url('/path/to/your/fonts/NotoSansJP-Bold.woff2') format('woff2');
  font-weight: 700; /* or bold */
  font-style: normal;
  font-display: swap;
}

/* Add additional weights as needed */

/* Apply the font to body or specific elements */
body {
  font-family: 'Noto Sans JP', sans-serif;
}
</code></pre>
<p>Your website will now use the lightweight subset font.</p>
<hr>
<p>:::ad</p>
<h2>4. Results: The Impact of FontTools Subsetting on Page Speed</h2>
<p>Here are the results of subsetting Noto Sans JP with FontTools:</p>
<p><img src="/90946f0b1716b602c5462dafbcffd828/04_%E3%82%B5%E3%83%96%E3%82%BB%E3%83%83%E3%83%88%E5%8C%96%E3%81%AB%E3%82%88%E3%82%8B%E5%AE%B9%E9%87%8F%E5%89%8A%E6%B8%9B.png" alt="Font file sizes after subsetting with FontTools (approximately 1.5MB each)"></p>
<p>(After subsetting with FontTools -- the file sizes are dramatically reduced!)</p>
<p>This reduction in file size means the browser downloads less data, resulting in faster page load times. The following network analysis shows the font loading times before and after the change.</p>
<p><strong>Before (without subsetting):</strong></p>
<p><img src="/730fffd840b22ada70f842d6eee15147/05_%E5%A4%89%E6%9B%B4%E5%89%8D%E3%81%AE%E3%83%8D%E3%83%83%E3%83%88%E3%83%AF%E3%83%BC%E3%82%AF%E5%88%86%E6%9E%90.png" alt="Font loading time before subsetting"></p>
<p><strong>After (subsetting with FontTools):</strong></p>
<p><img src="/635e4a74eb8fe92487999793128e351c/06_%E5%A4%89%E6%9B%B4%E5%BE%8C%E3%81%AE%E3%83%8D%E3%83%83%E3%83%88%E3%83%AF%E3%83%BC%E3%82%AF%E5%88%86%E6%9E%90.png" alt="Font loading time after subsetting with FontTools (improved)"></p>
<p>The reduced file size clearly results in shorter loading times. This directly contributes to improved page speed metrics like Core Web Vitals.</p>
<p>While FontTools is a command-line tool, once you understand the commands, it enables high-precision subsetting that preserves OTF data. If you're looking to improve your website's load speed -- especially if large Japanese font files are weighing it down -- I highly recommend giving FontTools subsetting a try. You can achieve even greater size reductions by further narrowing the character ranges in the <code>--unicodes</code> option.</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/38ceaa1c1ec73ee22af828c487833ce2/20241205-NotoSansJP-%E3%82%B5%E3%83%96%E3%82%BB%E3%83%83%E3%83%88.jpg" medium="image"/></item><item><title><![CDATA[How I Cloned and Replaced My SSD to Expand My C Drive from 500GB to 2TB]]></title><description><![CDATA[A detailed guide on cloning and replacing an SSD to expand the C drive from 500GB to 2TB, including how to fix the Windows boot error 0xc000000e]]></description><link>https://uhiyama-lab.com/en/blog/dialy/ssd-clone-and-replacement-guide/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/dialy/ssd-clone-and-replacement-guide/</guid><category><![CDATA[hardware]]></category><pubDate>Sun, 01 Dec 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>After about 4 years of using my custom-built PC, I started noticing some concerning behaviors:</p>
<ul>
<li>Opening folders or displaying files in Explorer was taking unusually long.</li>
<li>During file operations like renaming, the display would briefly reset (frustratingly committing the input mid-edit).</li>
<li>Occasionally the entire PC would briefly freeze, with the Windows taskbar and other elements appearing to reload.</li>
</ul>
<p>I tried memory diagnostics, GPU driver updates, and deleting unnecessary files, but nothing helped. I was starting to think "Maybe it's time for a new PC..." But before giving up, I decided to check the state of my C drive -- the heart of the PC.</p>
<p><img src="/ca77ce0905962a9228dc90ae10d142ff/20241201-1_C%E3%83%89%E3%83%A9%E3%82%A4%E3%83%96%E6%8F%9B%E8%A3%85%E5%89%8D.png" alt="C drive at critically low capacity (showing red)"></p>
<p>Bright red! Less than 5% free space remaining -- a danger zone.</p>
<p>Generally, it's recommended to keep at least 10-15% free space on the C drive where Windows is installed. When free space is extremely low, it can interfere with Windows temporary file creation, paging file (virtual memory) allocation, and system updates, leading to system instability, performance degradation, slower read/write speeds (especially on SSDs), and various other issues.</p>
<p>So I decided to tackle this storage shortage by cloning the entire contents of my current C drive (a ~500GB M.2 SSD) to a new, larger SSD (a 2TB 2.5-inch SSD) and swapping them. While the capacity expansion ultimately succeeded, I ran into a nasty Windows boot issue along the way that gave me quite a headache.</p>
<p>This article documents the specific steps for the SSD clone and replacement, the tools I used, and how I dealt with the boot error "0xc000000e" -- as a reference for anyone facing similar C drive capacity issues or considering an SSD replacement.</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#preparation">Preparation</a></li>
<li><a href="#step1-clone">Step 1: Creating the SSD Clone</a></li>
<li><a href="#step2-media">Step 2: Creating Installation Media</a></li>
<li><a href="#step3-install">Step 3: SSD Installation and Boot Error Resolution</a></li>
<li><a href="#step4-partition">Step 4: Partition Expansion</a></li>
<li><a href="#result">Replacement Complete and Results</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Preparation {#preparation}</h2>
<p>Here are the main items and software I used for this SSD clone replacement:</p>
<ul>
<li>
<p>New SSD: Silicon Power 2TB SSD 3D NAND A58 (2.5-inch SATA)</p>
<ul>
<li>
<p>The destination SSD. Choose any capacity, form factor (M.2 NVMe, 2.5-inch SATA, etc.), and brand you prefer. I chose this one because it was on sale. Switching from M.2 to 2.5-inch is fine as long as your motherboard has a free SATA port.</p>
<p>:::post-link{url="<a href="https://amzn.to/3VicRQJ">https://amzn.to/3VicRQJ</a>" text="Silicon Power 2TB SSD 3D NAND A58 (Amazon)"}</p>
</li>
</ul>
</li>
<li>
<p>SSD Cloning Software: Macrium Reflect Free</p>
<ul>
<li>
<p>Software for copying (cloning) the entire contents of the original SSD to the new one. There are many options, but I chose this one because it was free and highly rated.</p>
<p>:::post-link{url="<a href="https://www.gigafree.net/system/SystemBackup/macriumreflectfreeedition.html">https://www.gigafree.net/system/SystemBackup/macriumreflectfreeedition.html</a>" text="Macrium Reflect Free (Mado no Mori)"}</p>
<p>:::post-link{url="<a href="https://note.com/combat_travor/n/n931acb354e2d">https://note.com/combat_travor/n/n931acb354e2d</a>" text="How to Clone an SSD Using Macrium Reflect Free (combat-travor)"}</p>
</li>
</ul>
</li>
<li>
<p>USB Drive: 16GB or larger</p>
<ul>
<li>
<p>Used for recovery if Windows fails to boot after the replacement. Needed to create Windows installation media. <strong>[Important]</strong> I strongly recommend creating this beforehand.</p>
<p>:::post-link{url="<a href="https://www.microsoft.com/ja-jp/software-download/windows10">https://www.microsoft.com/ja-jp/software-download/windows10</a>" text="Windows 10 Media Creation Tool (Microsoft)"}</p>
<p>:::post-link{url="<a href="https://www.microsoft.com/ja-jp/software-download/windows11">https://www.microsoft.com/ja-jp/software-download/windows11</a>" text="Windows 11 Media Creation Tool (Microsoft)"}</p>
</li>
</ul>
</li>
<li>
<p>Partition Management Software: Paragon Hard Disk Manager 15 (or equivalent)</p>
<ul>
<li>Used after the replacement to adjust (expand) the partition size so the new SSD's full capacity is available for the C drive. Windows' built-in "Disk Management" can also do this, but dedicated software offers more flexible operations. I used software I already had, but free partition management tools are also available.</li>
</ul>
</li>
<li>
<p>SATA-to-USB Adapter Cable/Enclosure (optional):</p>
<ul>
<li>Needed if you're doing the clone via USB before installing the new SSD internally. Not needed if you can connect it directly inside the PC.</li>
</ul>
</li>
</ul>
<p>You can use different SSDs and software than those listed above. Choose based on your setup and budget. However, I recommend creating Windows installation media in advance, just in case of boot issues.</p>
<p><em>Note: BCD (Boot Configuration Data) is a file that stores the configuration information Windows needs in order to boot (such as which disk to boot the OS from). If boot errors occur after an SSD replacement, one possible cause is that the BCD information no longer matches the new environment.</em></p>
<hr>
<p>:::ad</p>
<h2>Step 1: Creating the SSD Clone {#step1-clone}</h2>
<p>First, clone (copy) the entire contents of your current C drive to the new SSD.</p>
<ol>
<li>
<p>Connect the new SSD to your PC. (Use a SATA-to-USB adapter or connect to a free internal port)</p>
</li>
<li>
<p>Launch the cloning software (Macrium Reflect Free in this case).</p>
</li>
<li>
<p>Follow the software's instructions to select the source drive (current C drive) and destination drive (new SSD), and start the clone process.</p>
</li>
</ol>
<p><img src="/d1cfce82599748e4f398d586e01f74c0/20241201-2_SSD%E3%82%AF%E3%83%AD%E3%83%BC%E3%83%B3.png" alt="Macrium Reflect Free clone settings screen"></p>
<p>(For detailed steps, the reference article by combat-travor mentioned above is very helpful.)</p>
<p><img src="/2fbd03b60ab624fca7b40561c18e3016/20241201-3_SSD%E3%82%AF%E3%83%AD%E3%83%BC%E3%83%B3%E5%AE%8C%E4%BA%86.png" alt="Macrium Reflect Free clone completion screen"></p>
<p>In my environment, cloning the C drive with about 450GB of data took approximately 3 hours and 40 minutes. The time varies significantly depending on your setup and data volume.</p>
<p><strong>[Tip] Partition Size During Cloning</strong></p>
<p>Some cloning software lets you specify the destination partition size. Generally, it's safest to set the destination partition to the same size as the source drive and leave the remaining capacity on the new SSD as "unallocated" space for now. This unallocated space will be merged with the C drive later in Step 4.</p>
<p>After cloning is complete, verify using Windows' "Disk Management" (<code>diskmgmt.msc</code>) or partition management software that the new SSD has the same partition structure as the original C drive, with the remainder as unallocated space.</p>
<p><img src="/e86f44d8d9a5680fac1feb2ffeffccfb/20241201-4_SSD%E3%82%AF%E3%83%AD%E3%83%BC%E3%83%B3%E5%AE%8C%E4%BA%86%E7%A2%BA%E8%AA%8D.png" alt="Verifying the cloned SSD in partition management software"></p>
<p>(The original C drive contents have been copied, with the remainder shown as unallocated)</p>
<h2>Step 2: Creating Installation Media {#step2-media}</h2>
<p>Ideally, Windows would boot right up after the SSD swap, but boot errors can occur depending on your environment. In my case, I got the following blue screen error:</p>
<p><img src="/df609ed8b112b468c813a1f5d2784298/20241201-5_%E3%83%96%E3%83%AB%E3%83%BC%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%BC%E3%83%B3.png" alt="Blue screen with error code 0xc000000e"></p>
<p>Error code "0xc000000e" typically occurs when Windows cannot access the boot device (in this case, the new SSD) or when there's a problem with the Boot Configuration Data (BCD).</p>
<p>To repair this error, you need to launch Command Prompt from the Windows Recovery Environment. For this, create Windows installation media (USB drive) in advance.</p>
<ol>
<li>
<p>Download the "Media Creation Tool" from Microsoft's official website.</p>
<p>:::post-link{url="<a href="https://www.microsoft.com/ja-jp/software-download/windows10">https://www.microsoft.com/ja-jp/software-download/windows10</a>" text="Windows 10 Media Creation Tool (Microsoft)"}</p>
<p>:::post-link{url="<a href="https://www.microsoft.com/ja-jp/software-download/windows11">https://www.microsoft.com/ja-jp/software-download/windows11</a>" text="Windows 11 Media Creation Tool (Microsoft)"}</p>
</li>
<li>
<p>Connect a blank USB drive (16GB or larger) to your PC.</p>
</li>
<li>
<p>Run the downloaded Media Creation Tool and follow the on-screen instructions to select "Create installation media for another PC (USB flash drive, DVD, or ISO file)" and create the installation media on the USB drive.</p>
</li>
</ol>
<p><img src="/e1eace0b83cd5ecb0f2b799fa37b53ea/20241201-6_%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AB%E3%83%A1%E3%83%87%E3%82%A3%E3%82%A2.png" alt="Windows Media Creation Tool screen"></p>
<p>With this USB drive, you can perform repair operations even if Windows won't boot.</p>
<p>:::post-link{url="<a href="https://jp.minitool.com/data-recovery/fix-error-0xc000000e.html">https://jp.minitool.com/data-recovery/fix-error-0xc000000e.html</a>" text="How to Fix Error Code 0xc000000e in Windows 10 (MiniTool)"}</p>
<p>:::post-link{url="<a href="https://drmpls.com/2023/07/21/replaced-the-ssd-cannot-boot/">https://drmpls.com/2023/07/21/replaced-the-ssd-cannot-boot/</a>" text="Causes and Solutions When a Cloned SSD Won't Boot (Dr. M Place)"}</p>
<h2>Step 3: SSD Installation and Boot Error Resolution {#step3-install}</h2>
<p>Once the cloning and installation media creation are done, it's time for the actual SSD swap.</p>
<ol>
<li>
<p>Completely power off the PC and unplug the power cable.</p>
</li>
<li>
<p>Open the PC case, remove the old SSD (original C drive), and install the new SSD (the cloned one). (Don't forget anti-static precautions!)</p>
</li>
<li>
<p>Close the PC case, connect the power cable, and turn on the PC.</p>
</li>
<li>
<p>Immediately after powering on (while the manufacturer logo is displayed), press the designated key (usually DEL or F2) to enter the BIOS (UEFI) settings screen.</p>
</li>
<li>
<p>In the BIOS settings, find the "Boot" menu and check the boot priority (Boot Options / Boot Priority). Verify that the newly installed SSD is recognized as "Windows Boot Manager" and is set as the first boot priority. (If it's not recognized or has a low priority, change the settings.)</p>
</li>
<li>
<p>Save the settings and exit BIOS to restart the PC.</p>
</li>
</ol>
<p>[Trouble] BIOS recognizes the drive but 0xc000000e error prevents booting!</p>
<p>This was the biggest hurdle in my case. Despite the BIOS correctly recognizing the new SSD as the boot drive, attempting to boot Windows would produce the blue screen error "0xc000000e."</p>
<p>Initially, I suspected a failed clone and re-cloned using different software (Paragon), but the result was the same.</p>
<p>[Solution] Boot from Windows Installation Media and Rebuild the BCD</p>
<p>Ultimately, I resolved the issue by following these steps using the Windows installation media (USB drive) created in Step 2. This process repairs and rebuilds the Windows Boot Configuration Data (BCD).</p>
<ol>
<li>
<p>With the PC powered off, connect the Windows installation media (USB drive).</p>
</li>
<li>
<p>Power on the PC and immediately enter the BIOS settings.</p>
</li>
<li>
<p>Change the boot priority to make the USB drive first, then save and restart.</p>
</li>
<li>
<p>When the Windows Setup screen boots from the USB drive, select "Repair your computer" > "Troubleshoot" > "Advanced options" > "Command Prompt."</p>
</li>
<li>
<p>In Command Prompt, execute the following commands in order. These repair the boot sector and rebuild the BCD. (See the reference articles below for detailed procedures.)</p>
<ul>
<li><code>bootrec /fixmbr</code></li>
<li><code>bootrec /fixboot</code> (see note below)</li>
<li><code>bootrec /scanos</code></li>
<li><code>bootrec /rebuildbcd</code></li>
</ul>
<p>(In some cases, you may also need to assign a drive letter to the EFI partition using the <code>diskpart</code> command.)</p>
</li>
</ol>
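<p>The <code>diskpart</code> step mentioned in passing above can be sketched as follows. This is an illustration only: the EFI system partition's volume number varies by machine, so identify the small FAT32 "System" volume in the <code>list volume</code> output first, and <code>S:</code> is simply an unused drive letter.</p>
<pre><code>diskpart
list volume
select volume 3
assign letter=S
exit

bcdboot C:\Windows /s S: /f UEFI
</code></pre>
<p><code>bcdboot</code> recreates the boot files (including the BCD store) on the EFI partition, which can serve as an alternative repair in UEFI environments where <code>bootrec</code> alone doesn't help.</p>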
<p>:::post-link{url="<a href="https://freesoft.tvbok.com/tips/efi_installation/uefi_bootrec.html">https://freesoft.tvbok.com/tips/efi_installation/uefi_bootrec.html</a>" text="How to Repair UEFI from Command Prompt (Bokunchi no TV Annex)"}</p>
<p>[Note] "Access is denied" error with <code>bootrec /fixboot</code></p>
<p><img src="/63c80fbeba3531f3f128d2ff38e9e55e/20241201-7_fixboot.png" alt="Access denied error with bootrec /fixboot"></p>
<p>During the command execution, you may encounter an "Access is denied" error with <code>bootrec /fixboot</code>. This appears to be a common issue with recent Windows versions (especially when installed in UEFI mode).</p>
<p>In my case, I ignored this error and proceeded to execute <code>bootrec /scanos</code> and <code>bootrec /rebuildbcd</code>, which ultimately allowed Windows to boot successfully. The <code>fixboot</code> command is primarily used for repairing older MBR-format disks, and in UEFI environments, it may not be necessary or may require alternative steps (like EFI partition manipulation using <code>diskpart</code>).</p>
<p>If you encounter this error and Windows still won't boot after running <code>rebuildbcd</code>, you may need to try additional solutions described in the reference articles below.</p>
<p>:::post-link{url="<a href="https://itojisan.xyz/trouble/17752/">https://itojisan.xyz/trouble/17752/</a>" text="What to Do When 'bootrec /fixboot' Access Is Denied (IT HOOK)"}</p>
<p>Once the BCD rebuild succeeds, exit Command Prompt and restart the PC -- Windows should now boot from the new SSD!</p>
<h2>Step 4: Partition Expansion {#step4-partition}</h2>
<p>Once Windows boots successfully, it's time for the final step. Currently, the new SSD only has a partition the same size as the original C drive, with the remaining large capacity sitting as "unallocated" space. We need to merge this unallocated space with the C drive so the SSD's full capacity is usable.</p>
<p><img src="/6d48deef56eeee2a3c2f7a40c064b79c/20241201-8_C%E3%83%89%E3%83%A9%E3%82%A4%E3%83%96%E3%83%91%E3%83%BC%E3%83%86%E3%82%A3%E3%82%B7%E3%83%A7%E3%83%B3%E7%B5%B1%E5%90%88.png" alt="Partition management software showing C drive and unallocated space"></p>
<p>This can be done with Windows' built-in "Disk Management," but if a recovery partition or similar is sandwiched between them, expansion may not be possible. Using dedicated partition management software is more reliable.</p>
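<p>For reference, when the layout is simple (the unallocated space sits directly after the C drive partition with nothing in between), the built-in route can be sketched like this: either right-click the C: volume in Disk Management and choose "Extend Volume...", or use <code>diskpart</code> (the volume number here is illustrative; confirm it with <code>list volume</code>):</p>
<pre><code>diskpart
list volume
select volume 1
extend
exit
</code></pre>
<p><code>extend</code> with no arguments grows the selected volume into the contiguous unallocated space that follows it.</p>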
<p>I used "Paragon Hard Disk Manager 15," which I already had. Choose a feature like "Resize/Move Partition," select the C drive partition, and expand it to absorb the unallocated space behind it so the partition fills the drive.</p>
<p><img src="/71715adb0af97fed3af5d433c055ee75/20241201-9_C%E3%83%89%E3%83%A9%E3%82%A4%E3%83%96%E5%AE%B9%E9%87%8F%E6%8B%A1%E5%BC%B5.png" alt="Expanding partition size with Paragon Hard Disk Manager"></p>
<p>After applying the settings, the PC usually requires a restart, and the partition change is processed on a pre-Windows boot screen. Once the processing completes and Windows boots again, the C drive capacity should be expanded to the new SSD's maximum capacity (approximately 2TB in this case).</p>
<p>:::ad</p>
<h2>Replacement Complete and Results {#result}</h2>
<p><img src="/38a6b84edea0c7f964cb9f58bc1d3b33/20241201-10_C%E3%83%89%E3%83%A9%E3%82%A4%E3%83%96%E6%8F%9B%E8%A3%85%E5%BE%8C.png" alt="C drive after expansion (plenty of free space)"></p>
<p>After going through all these steps, I successfully replaced the C drive SSD from 500GB to 2TB, dramatically expanding the capacity!</p>
<p>With sufficient free space on the C drive (over 15%), all the PC issues I'd been experiencing -- Explorer slowdowns, display redraws, brief freezes -- disappeared as if by magic. Indeed, insufficient C drive free space has a significant impact on system stability. Don't ignore a red drive indicator; address it early (whether by deleting unnecessary files or expanding capacity like I did).</p>
<p>SSD clone replacement isn't overly complicated in terms of procedure, but boot issues can arise depending on your environment. UEFI environments in particular seem prone to BCD-related problems. I hope the troubleshooting described in this article helps anyone facing the same issue. (Please note: all operations are at your own risk.)</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/481ebec3151d6e206b01a0511c1c7ace/20241201-SSD%E3%82%92%E3%82%AF%E3%83%AD%E3%83%BC%E3%83%B3%E6%8F%9B%E8%A3%85%E3%81%97%E3%81%A6C%E3%83%89%E3%83%A9%E3%82%A4%E3%83%96%E3%81%AE%E5%AE%B9%E9%87%8F%E3%82%92500GB%E3%81%8B%E3%82%892TB%E3%81%AB%E6%8B%A1%E5%BC%B5%E3%81%99%E3%82%8B.jpg" medium="image"/></item><item><title><![CDATA[Easy Cutscene Creation in RPG Maker MV/MZ + Camera Control with Galv_CamControl]]></title><description><![CDATA[How to create dramatic cutscenes in RPG Maker and enhance them with camera control using the Galv_CamControl plugin]]></description><link>https://uhiyama-lab.com/en/blog/gamedev/rpgmaker-cutscene-camera/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/gamedev/rpgmaker-cutscene-camera/</guid><category><![CDATA[rpgmaker]]></category><pubDate>Thu, 28 Nov 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="/8ee0d58ba150d836da5f6bae53388e76/07_%E3%83%AA%E3%83%83%E3%83%81%E3%82%AB%E3%83%83%E3%83%88%E3%82%B7%E3%83%BC%E3%83%B3.webp" alt="Example of an RPG Maker cutscene with camera effects"></p>
<p>When you want to add dramatic <strong>cutscenes</strong> (event scenes) to your RPG Maker game, you'll be pleasantly surprised -- compared to other game engines like Unity or Unreal Engine, <strong>RPG Maker</strong> makes it remarkably <strong>easy</strong> to create basic cutscenes.</p>
<p>This article covers everything from <strong>creating basic cutscenes using RPG Maker's built-in features</strong> to enhancing them with <strong>camera control</strong> for more expressive storytelling. For camera effects, we'll use the popular plugin "<strong>Galv_CamControl.js</strong>."</p>
<p>With these techniques, you can make your game's story more engaging and memorable for players.</p>
<p><strong>In this article</strong></p>
<ol>
<li>Creating basic cutscenes (RPG Maker built-in features)</li>
<li>Adding camera effects (Galv_CamControl plugin)</li>
<li>Completing a polished cutscene</li>
</ol>
<hr>
<p>:::ad</p>
<h2>Creating Basic Cutscenes (RPG Maker Built-in Features)</h2>
<p>Let's start with creating cutscenes using only RPG Maker's standard features -- no plugins required.</p>
<p><img src="/33abb1aa2c7206be5e0e3fc5ffbbe143/01_%E3%82%B7%E3%83%B3%E3%83%97%E3%83%AB%E3%81%AA%E3%82%AB%E3%83%83%E3%83%88%E3%82%B7%E3%83%BC%E3%83%B3.webp" alt="A simple cutscene created with only RPG Maker&#x27;s built-in features"></p>
<p>(You can create scenes like this using just the built-in features.)</p>
<p><img src="/405233ee3063576dadf2003d565fa565/02_%E3%82%B7%E3%83%B3%E3%83%97%E3%83%AB%E3%81%AA%E3%82%AB%E3%83%83%E3%83%88%E3%82%B7%E3%83%BC%E3%83%B3.png" alt="RPG Maker event command settings for a basic cutscene"></p>
<p>By combining the following basic event commands, you can easily create cutscenes with character conversations and movement:</p>
<ul>
<li><strong>Show Text:</strong> Displays character dialogue or narration. Combine with face graphics to clearly show who's speaking.</li>
<li><strong>Show Balloon Icon:</strong> Displays icons like "!" or "?" above characters' heads to visually express emotions or states.</li>
<li><strong>Set Movement Route:</strong> Moves event characters or the player along a specified path. You can also include direction changes, waits, and switch operations. Being able to specify target characters by "This Event" or by name is incredibly intuitive.</li>
<li><strong>Change Transparency:</strong> Temporarily hides the player character from the screen. Useful for transitions into and out of event scenes.</li>
</ul>
<p><strong>[Tip] Give Your Events Descriptive Names!</strong></p>
<p><img src="/bd2bfa8b12be53b3705d680235da1348/03_%E3%82%A4%E3%83%99%E3%83%B3%E3%83%88%E3%81%AB%E3%81%AF%E5%90%8D%E5%89%8D%E3%82%92%E3%81%A4%E3%81%91%E3%82%88%E3%81%86.png" alt="Setting an event name in the event settings screen"></p>
<p><img src="/4175f05307ffed7ba493b095bf6dbe61/04_%E3%82%A4%E3%83%99%E3%83%B3%E3%83%88%E3%81%AB%E3%81%AF%E5%90%8D%E5%89%8D%E3%82%92%E3%81%A4%E3%81%91%E3%82%88%E3%81%86_2.png" alt="Selecting events by name in Set Movement Route"></p>
<p>For those with experience in other game engines, RPG Maker's event creation features -- particularly the character targeting (e.g., "Player," "This Event," "Event ID: 1") and movement route setup -- are surprisingly polished and streamlined.</p>
<p>One important practice: <strong>always give descriptive names to events placed on the map</strong> (e.g., "VillagerA," "Chest_Cave"). Since you can select events by name when setting up event commands, it makes management much easier. Even for solo development, this saves you from the "which event was that again?" confusion during later debugging and modifications, helping your future self. As game development progresses and data grows more complex, these small habits make a big difference.</p>
<p>Now, while built-in features alone can create cutscenes, they may start to feel a bit monotonous with the same fixed camera angle throughout. Let's add some camera work to spice things up.</p>
<p>:::ad</p>
<h2>Adding Camera Effects (Galv_CamControl Plugin)</h2>
<p>Adding <strong>camera control</strong> to your basic cutscenes can greatly enhance immersion, guide the player's attention, and create more impressive visual storytelling. In RPG Maker MV/MZ, the standard approach for camera control is to use a plugin.</p>
<p>We'll be using "<strong>Galv_CamControl.js</strong>," a well-established camera control plugin used by many RPG Maker developers.</p>
<p>Plugin source: <a href="https://galvs-scripts.com/2015/11/27/mv-cam-control/">Galv's Scripts - MV Cam Control</a> (<em>For MZ, you may need a community-modified version. Please search and verify separately.</em>)</p>
<p><strong>[Installing the Plugin]</strong></p>
<ol>
<li>Download the plugin file (<code>Galv_CamControl.js</code>) from the above site (or find a compatible version).</li>
<li>Copy the downloaded file into the <code>js/plugins</code> folder of your RPG Maker project.</li>
<li>Open the RPG Maker editor and go to "Tools" > "Plugin Manager."</li>
<li>In the Plugin Manager window, double-click an empty row, select "Galv_CamControl" from the "Name" dropdown, and click "OK."</li>
<li>Make sure the status is set to "ON" and click "OK" to close the Plugin Manager.</li>
</ol>
<p>The plugin is now active, and you can execute camera control commands through the "Plugin Command" event command.</p>
<p><img src="/b971ff55f19c0b732b2bf6059d2d1d17/06_Galv_CamControl.png" alt="Galv_CamControl plugin command input example"></p>
<h3>Basic Usage of Galv_CamControl (Plugin Commands)</h3>
<p>You can control the camera by executing the following commands via the "Plugin Command" event command. (These are MV-based syntax examples. The format may differ slightly for MZ or specific modified versions -- check the plugin's help documentation.)</p>
<ul>
<li><strong>Focus the camera on a specific event (follow it):</strong>
Example: <code>CAM EVENT 1</code> (focuses camera on Event ID 1)
Example: <code>CAM EVENT 2</code> (focuses camera on Event ID 2)</li>
<li><strong>Return the camera to the player (follow the player):</strong>
<code>CAM PLAYER</code></li>
<li><strong>Move the camera to specified map coordinates:</strong>
<code>CAM MAP X Y</code> (Example: <code>CAM MAP 10 15</code> moves the camera to coordinates (10, 15))</li>
<li><strong>Unlock camera tracking (fix at current position):</strong>
<code>CAM TARGET 0</code></li>
</ul>
<p><strong>[Note: Camera Scroll Duration]</strong>
You can add a <strong>number (frame count) after a space</strong> following any of the above commands to make the camera scroll smoothly over that duration. For example, <code>CAM EVENT 1 60</code> will smoothly scroll the camera to Event 1 over 60 frames (1 second). If no duration is specified, the camera moves instantly.</p>
<p>Combining these commands with event commands like "Show Text," "Set Movement Route," and "Wait" lets you focus the camera on whoever is speaking, highlight important items or locations, and more.</p>
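<p>As a concrete sketch (the event ID and durations are made up for illustration), a short conversation beat might look like this in the event editor -- each pan takes 60 frames, and the "Wait" lets it finish before the text appears:</p>
<pre><code>◆Plugin Command: CAM EVENT 2 60
◆Wait: 60 frames
◆Show Text: (the NPC's line)
◆Plugin Command: CAM PLAYER 60
◆Wait: 60 frames
</code></pre>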
<p><strong>[Advanced Application: "Show, Don't Tell" Game Design]</strong></p>
<p><img src="/b567a12e9cebceb0065275543d267c0f/06-1-%E5%AE%9D%E7%AE%B1%E3%82%A4%E3%83%99%E3%83%B3%E3%83%88.webp" alt="Camera panning to a treasure chest during an event"></p>
<p>Good game design goes beyond telling players "go here" or "pick up that." By <strong>briefly showing the destination or target object with the camera</strong>, you guide players to understand intuitively what to do next.</p>
<p>Camera control plugins like Galv_CamControl are excellent not just for cutscene effects but also for implementing this "<strong>show, don't tell</strong>" design approach. For example, when an NPC gives a quest, you can briefly pan the camera to the objective or relevant object. This makes it clearer for players what to do next, improving the overall gameplay experience.</p>
<p><strong>Advanced Technique: Standoff Scenes</strong>
For tense scenes where characters face each other, you can place a transparent empty event between the two characters and lock the camera on that event (e.g., <code>CAM EVENT [empty event ID] [duration]</code> followed by <code>CAM TARGET 0</code>). This positions the characters at opposite edges of the screen with empty space in the center, creating a cinematic composition.</p>
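<p>Put together (the empty event's ID of 5 is just an example), the standoff sequence might look like:</p>
<pre><code>◆Plugin Command: CAM EVENT 5 90
◆Wait: 90 frames
◆Plugin Command: CAM TARGET 0
◆Show Text: (the tense exchange)
</code></pre>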
<p>:::ad</p>
<h2>Completing a Polished Cutscene</h2>
<p><img src="/8ee0d58ba150d836da5f6bae53388e76/07_%E3%83%AA%E3%83%83%E3%83%81%E3%82%AB%E3%83%83%E3%83%88%E3%82%B7%E3%83%BC%E3%83%B3.webp" alt="Completed RPG Maker cutscene enhanced with camera effects"></p>
<p>By combining basic event commands (dialogue, movement, balloon icons, etc.) with <strong>camera control</strong> via <strong>Galv_CamControl</strong>, you can see how the initially simple cutscene becomes much more dynamic and visually engaging.</p>
<p>In 2D-based games like those made with RPG Maker, event scenes can easily feel monotonous without visual movement. By effectively moving the camera, you can guide the player's viewpoint, emphasize the importance of a scene, deliver emotional impact, and keep players engaged throughout.</p>
<p>Combining RPG Maker's accessible event creation features with plugin-based camera control is a straightforward way to make your game's storytelling significantly more compelling.</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/8ee0d58ba150d836da5f6bae53388e76/07_%E3%83%AA%E3%83%83%E3%83%81%E3%82%AB%E3%83%83%E3%83%88%E3%82%B7%E3%83%BC%E3%83%B3.webp" medium="image"/></item><item><title><![CDATA[My First VRChat Avatar Purchase and Customization: A Step-by-Step Guide with the Manuka Model]]></title><description><![CDATA[A VRChat beginner's journey through first avatar customization with the popular Manuka model. Detailed guide covering VCC setup, outfit changes, eye and hair modifications]]></description><link>https://uhiyama-lab.com/en/blog/dialy/vrchat-first-model-import/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/dialy/vrchat-first-model-import/</guid><category><![CDATA[vrchat]]></category><pubDate>Thu, 21 Nov 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="/04e252ad57f1cd36e48091df825367e8/20241121-0-VRChat%E3%81%AF%E3%81%98%E3%82%81%E3%81%A6%E3%81%AE%E3%83%A2%E3%83%87%E3%83%AB%E5%B0%8E%E5%85%A51.webp" alt="Customized Manuka avatar in VRChat"></p>
<p>Are you interested in VRChat but wondering "How do I get an avatar?" or "Customization sounds complicated..."? Actually, the process of importing and customizing VRChat avatars has become much more straightforward and accessible compared to before.</p>
<p>This article documents my experience as a VRChat beginner purchasing the popular avatar model "Manuka" on Booth and attempting my first avatar customization -- outfit changes, texture color and eye modifications, hairstyle changes, and more.</p>
<p>In particular, the official tool "VCC (VRChat Creator Companion)" has made the avatar uploading process through Unity remarkably simple. I hope this article serves as a helpful reference for anyone looking to start using avatars in VRChat or try their hand at avatar customization.</p>
<p>(I started VRChat avatar customization as part of my 3D modeling learning journey. Here's my planned learning path:)</p>
<ol>
<li>Basic customization of existing purchased avatars (outfit changes, color modifications, simple gimmicks, etc.) -- The focus of this article</li>
<li>Importing VRoid Studio-created models into VRChat (practicing texture creation, etc.)</li>
<li>Full-scratch model creation in Blender (ultimate goal)</li>
</ol>
<p>This time, I'll share a detailed account of Step 1 -- the experience, procedures, and stumbling blocks I encountered.</p>
<p>:::ad</p>
<p>:::toc</p>
<ul>
<li><a href="#manuka-import">Purchasing and Importing Manuka</a></li>
<li><a href="#clothing">Outfit Changes</a></li>
<li><a href="#color-change">Eye and Clothing Color Changes</a></li>
<li><a href="#hairstyle">Hairstyle Change</a></li>
<li><a href="#summary">Summary</a></li>
<li><a href="#bonus">Bonus: World Creation Mishaps</a></li>
</ul>
<p>:::</p>
<p>:::ad</p>
<h2>Purchasing and Importing Manuka {#manuka-import}</h2>
<p><img src="/3e56556e7a8f6ffd12a5ac77698c77f6/20241121-1-%E3%83%9E%E3%83%8C%E3%82%AB%E3%83%A2%E3%83%87%E3%83%AB%E5%B0%8E%E5%85%A5.webp" alt="Imported Manuka model in Unity"></p>
<p>For my first avatar customization base, I chose the hugely popular 3D model "Manuka," created and sold by Studio DINGO.</p>
<p>:::post-link{url="<a href="https://jingo1016.booth.pm/items/5058077">https://jingo1016.booth.pm/items/5058077</a>" text="Original 3D Model 'Manuka' ver1.02 (BOOTH)"}</p>
<p>I happened to be looking for an avatar right when the creator was running a Black Friday sale (50% OFF!) on X (formerly Twitter), which sealed the deal. As expected from such a popular model, the quality is excellent and she's adorable!</p>
<p>[Tip] Sales on Booth: Booth, where many VRChat avatars and outfits are sold, occasionally has individual creator sales. If you find a model you like, following the creator's X account might help you catch special deals.</p>
<p>For the avatar import process (importing into Unity and uploading to VRChat), this article from "Metacul Frontier" was extremely helpful. It features detailed explanations with screenshots, making it a great starting point for beginners.</p>
<p>:::post-link{url="<a href="https://metacul-frontier.com/?p=15582">https://metacul-frontier.com/?p=15582</a>" text="[Complete Guide] How to Upload and Import VRChat Avatars (Metacul Frontier)"}</p>
<h3>VCC (VRChat Creator Companion) is Incredibly Convenient!</h3>
<p><img src="/17816428c87b0cd955e9cf4fef02d2e1/20241121-2-VCC%E3%83%A1%E3%83%8B%E3%83%A5%E3%83%BC%E7%94%BB%E9%9D%A2.png" alt="VRChat Creator Companion (VCC) menu screen"></p>
<p>What impressed me most during this avatar import was the VRChat official tool "VCC (VRChat Creator Companion)." I remember the Unity setup and VRChat component installation being much more complicated back in 2018, but VCC made it surprisingly easy.</p>
<p>Just select what you want to create (avatar or world) in VCC and create a new project -- everything you need gets automatically set up in a Unity project. It eliminates the hassle of manual configuration through Unity Hub and is incredibly stress-free. The project management screen is also clean and easy to read, making it a tremendous help for beginners.</p>
<p><img src="/e63282bb7d5d80559db01dd7c3b219d5/20241121-3-VCC%E3%83%A1%E3%83%8B%E3%83%A5%E3%83%BC%E7%94%BB%E9%9D%A22.png" alt="VCC project management screen"></p>
<p>:::ad</p>
<h2>Outfit Changes {#clothing}</h2>
<p><img src="/d244c0e175c1b4828105191a654a03a2/20241121-4-%E6%9C%8D%E3%81%AE%E5%B0%8E%E5%85%A5.webp" alt="Purchased outfit on Manuka"></p>
<p>With the avatar imported, it's time for the first step of customization: changing outfits. Normally, third-party outfit assets need fine-tuning in Unity to match the avatar's body shape, but for popular avatars like "Manuka," many creators sell "Manuka-compatible" outfits on Booth that are already optimized for her. This is extremely helpful for beginners.</p>
<p>This time, I wanted to shift from the default energetic vibe to something more subdued and mysterious, so I was looking for oversized techwear. That's when I found "Manuka-Compatible Outfit [Tech Wear]" by hajimata General Store.</p>
<p>:::post-link{url="<a href="https://booth.pm/ja/items/6288909">https://booth.pm/ja/items/6288909</a>" text="Manuka-Compatible Outfit [Tech Wear] (BOOTH)"}</p>
<p>By sheer coincidence, it was released on the same day I was searching, so I bought it immediately. The design and quality are outstanding!</p>
<p><img src="/a691699b1288e3b51a0f614e015aa9a5/20241121-5-%E6%9C%8D%E8%A3%85%E5%B0%8E%E5%85%A5Unity.png" alt="Putting Tech Wear on Manuka in Unity"></p>
<p>Since it's a "Manuka-compatible" outfit, the import is very straightforward. Basically, you just drag the purchased outfit's Prefab (a pre-configured component set) into the Manuka avatar object in Unity's Hierarchy window. (Hide the original outfit and unnecessary parts. This time, I also hid the animal ears.)</p>
<p>For outfit changing methods, the "Metacul Frontier" article mentioned earlier was also very helpful.</p>
<p>:::post-link{url="<a href="https://metacul-frontier.com/?p=13689">https://metacul-frontier.com/?p=13689</a>" text="How to Easily Customize Outfits Bought on BOOTH in Unity (Metacul Frontier)"}</p>
<h3>[Pitfall] Accessory Tracking Setup Mistake</h3>
<p>I had a small mishap with the placement of the tactical goggles that came with the Tech Wear.</p>
<p>Initially, I placed the goggles Prefab at the same hierarchy level as the outfit body, so the goggles tracked the body's movement but not the head's movement, causing them to clip inside the head.</p>
<p><img src="/4a2ca6e7dd9c8ca13996c7be50e020ee/20241121-6-%E6%9C%8D%E3%83%90%E3%82%B0.webp" alt="Goggles clipping into the head -- a failed example"></p>
<p>[Solution] Accessories like goggles that need to follow the head must be placed as a child object of the "Head" bone within the avatar's bone structure (Armature). Specifically, move the goggles Prefab under <code>Armature/Hips/Spine/Chest/Neck/Head</code>. This makes the goggles move with the head.</p>
<p><img src="/2bd90af32027d5f82ed6807d839ae130/20241121-7-%E3%83%A1%E3%82%AC%E3%83%8D%E8%A8%AD%E7%BD%AE%E4%BD%8D%E7%BD%AE.png" alt="Correct goggles placement (under the Head bone)"></p>
<p>Understanding the avatar's parent-child relationship (hierarchy structure) is key to properly placing accessories.</p>
<p>:::ad</p>
<h2>Eye and Clothing Color Changes {#color-change}</h2>
<p>After the outfit, I modified the eye and clothing colors, which greatly affect the avatar's overall impression.</p>
<p>The easiest way to change eyes is to purchase and swap in eye textures sold on Booth. However, this time I tried directly editing Manuka's original face texture file as practice for future 3D model creation.</p>
<p>[Steps]</p>
<ol>
<li>Find the face texture file (.png or .psd, etc.) for Manuka in the Unity project and copy it to a working folder.</li>
<li>Open the copied texture file in an image editor like Photoshop or CLIP STUDIO PAINT and redraw the eye area. (This time, I changed the pupil shape and made the color green.)</li>
<li>Export the edited texture as a PNG file.</li>
<li>Save the exported PNG file to the same folder as the original texture file in the Unity project, <strong>overwriting with the same filename</strong>.</li>
</ol>
<p>When you overwrite the file, Unity automatically detects the change and applies it to the avatar's appearance. Very convenient.</p>
<p><img src="/000588ef4588e51af30d3668a3be9981/20241121-8-%E7%9E%B3%E3%81%AE%E6%8F%8F%E3%81%8D%E5%A4%89%E3%81%88.png" alt="Before and after comparison of eye texture editing"></p>
<h3>[Pitfall] Highlights Still Showing After Deletion? -- The Shape Key Trap</h3>
<p>After updating the eye texture, I encountered a puzzling issue: "Wait, the eye highlights and clover pattern I deleted in the texture editor are still showing...?"</p>
<p>[Cause and Solution] These elements were controlled by "Shape Keys," not by the texture. Shape Keys are a feature that lets you change the shape of specific parts of an avatar using sliders, commonly used for facial expressions (eye blinks, mouth movements) and toggling small decorative details on/off.</p>
<p>In Unity's Inspector window, select the avatar's face or body mesh (Skinned Mesh Renderer) and check the "BlendShapes" (Shape Keys) section. If there's a slider with a name corresponding to the highlights or pattern, adjusting its value removes the display. (In other words, I didn't need to remove them from the texture at all!)</p>
<p><img src="/ad726e29508c49d004596d6969c080a6/20241121-9-%E3%82%B7%E3%82%A7%E3%82%A4%E3%83%97%E3%82%AD%E3%83%BC.png" alt="Adjusting Shape Keys (BlendShapes) in Unity&#x27;s Inspector"></p>
<p>While I was at it, I also adjusted the shape key that slightly lowers the eyelids for a more subdued look. Shape Keys are an incredibly powerful feature for giving your avatar personality.</p>
<p>For the clothing color change, I overlaid a navy-colored layer on the original black clothing texture to match the eye color, then used the same overwrite method. For more elaborate customization, you could also draw additional patterns.</p>
<p><img src="/4163ea9a3ffc75c1a1668e62e23d9a51/20241121-10-%E7%9E%B3%E3%81%AE%E4%BF%AE%E6%AD%A3.webp" alt="Manuka after eye and clothing color modifications"></p>
<p>This brought me much closer to the "calm and mysterious" look I was going for.</p>
<p>:::ad</p>
<h2>Hairstyle Change {#hairstyle}</h2>
<p><img src="/1499a56468bf4a472749366717539cfb/20241121-11-%E9%AB%AA%E5%9E%8B%E3%81%AE%E5%A4%89%E6%9B%B4.png" alt="Manuka with a new hairstyle"></p>
<p>For the finishing touch, I changed the hairstyle too. Once again, "Manuka-compatible" hairstyle assets were abundantly available on Booth, and I quickly found one that matched my vision. That's the advantage of a popular avatar.</p>
<p>This time, I purchased "Ear-out Bob" by Kayasutoa.</p>
<p>:::post-link{url="<a href="https://kayastore.booth.pm/items/5061681">https://kayastore.booth.pm/items/5061681</a>" text="Ear-out Bob [Manuka Hairstyle] (BOOTH)"}</p>
<p>Manuka's default bun hairstyle is cute too, but I felt the bob's subdued silhouette better fit this concept. (I kept the included ribbon as an accent and enlarged it to maximum size.)</p>
<p><img src="/04e252ad57f1cd36e48091df825367e8/20241121-0-VRChat%E3%81%AF%E3%81%98%E3%82%81%E3%81%A6%E3%81%AE%E3%83%A2%E3%83%87%E3%83%AB%E5%B0%8E%E5%85%A51.webp" alt="Completed Manuka with new hairstyle and ribbon"></p>
<h3>[Pitfall] Error During Hairstyle Setup</h3>
<p>Similar to the outfit, I encountered an error when trying to set up the purchased hairstyle Prefab by placing it into the avatar object. (This particularly happens when using tools like Modular Avatar.)</p>
<p><img src="/4b3a6db490d7d24fd68d569d8f3073b5/20241121-12-%E9%AB%AA%E3%82%BB%E3%83%83%E3%83%88%E3%82%A2%E3%83%83%E3%83%97%E3%82%A8%E3%83%A9%E3%83%BC.png" alt="Example error message during hairstyle setup"></p>
<p>[Solution] In such cases, you can add "MA Bone Proxy" (a component included in Modular Avatar) to the hairstyle object and specify the avatar's "Head" bone in the component's "Bone Reference." This ensures the hair properly follows the head. (Once this is set up, you don't need to re-run the setup tool.)</p>
<p><img src="/87aecd88713f00e7041817b0c5090116/20241121-13-%E9%AB%AA%E8%A8%AD%E5%AE%9A.png" alt="MA Bone Proxy component settings example"></p>
<p>Different assets may have different import methods, so it's important to carefully read the included Readme documentation.</p>
<p>:::ad</p>
<h2>Summary {#summary}</h2>
<p><img src="/fe7d380b3ff9e3c805dbfdfaf6cc6f54/20241121-15-%E3%81%BE%E3%81%A8%E3%82%81.jpg" alt="Group photo of customized avatars"></p>
<p>This time, as my "first VRChat model import and customization," I used the popular avatar "Manuka" as a base to try purchasing and importing outfits and hairstyles, as well as editing eye and clothing textures and colors.</p>
<p>Thanks to the VRChat official tool "VCC," the abundance of compatible assets on Booth, and the many clear tutorial articles left by pioneers, even a beginner like me was able to proceed through avatar customization relatively smoothly and enjoyably. I'm grateful to everyone involved.</p>
<p>I think I managed to achieve the "calm and mysterious" Manuka I was aiming for, at least to some extent.</p>
<p>The road to my ultimate goal of full-scratch model creation in Blender is still long, but through customization work like this, I've been gradually deepening my understanding of texture editing, Unity operations, and avatar structure (Armature and Shape Keys). If I keep trying things like VRoid model imports, I feel like I might eventually reach that goal.</p>
<p>Above all, watching my avatar transform through my own work was incredibly fun. I hope this article serves as some kind of reference for anyone about to step into the world of VRChat avatars or attempt their first avatar customization. Next time, I'd like to try more complex customizations like adding gimmicks!</p>
<h2>Bonus: World Creation Mishaps {#bonus}</h2>
<p><img src="/b8889cc59dd932994971a3cc3042648b/20241121-14-%E3%83%86%E3%82%B9%E3%83%88%E3%83%AF%E3%83%BC%E3%83%AB%E3%83%89%E4%BD%9C%E6%88%90.webp" alt="Failed attempt in a self-made world"></p>
<p>Alongside avatar customization, I also briefly tried my hand at VRChat "world creation." My naive thought of "If I'm making a world, it should be an obstacle course!" led me straight into a wall.</p>
<p>The most challenging part was the "getting players to ride on moving platforms" mechanic. Even when placing moving platforms using normal game development techniques, players in VRChat don't follow the platform's movement and simply fall through. This is because VRChat is an online game that needs to constantly sync position data for multiple players, so simple physics alone doesn't cut it.</p>
<p>Apparently, workarounds using sit gimmicks exist, but implementing moving platforms requires quite technical implementation. I'll start by learning basic world building first.</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/7e0f2208b67d10ff09e10114cab72ef8/Thumb_20241121-VRChat%E3%81%AF%E3%81%98%E3%82%81%E3%81%A6%E3%81%AE%E3%83%A2%E3%83%87%E3%83%AB%E5%B0%8E%E5%85%A5.jpg" medium="image"/></item><item><title><![CDATA[RPG Maker MV/MZ Plugin: Auto-Switch Player Characters Per Map with UM_MapActorSetting.js]]></title><description><![CDATA[A guide for the UM_MapActorSetting.js plugin that automatically switches the player character per map in RPG Maker, with usage instructions and full source code]]></description><link>https://uhiyama-lab.com/en/blog/gamedev/rpgmaker-plugin-mapactorsetting/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/gamedev/rpgmaker-plugin-mapactorsetting/</guid><category><![CDATA[rpgmaker]]></category><pubDate>Mon, 28 Oct 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Have you ever wanted to create a game in <strong>RPG Maker MV</strong> or <strong>RPG Maker MZ</strong> where multiple protagonists appear, or where a specific character is controlled on specific maps?</p>
<p>Normally, to switch the player character per map, you need to place a "Change Party Leader" command inside each map transfer or location move event. However, as the number of target maps grows, this method has several issues:</p>
<ul>
<li>It's <strong>time-consuming</strong> to set up and copy events for each map.</li>
<li>It's prone to <strong>bugs</strong> from missed settings or incorrect character assignments.</li>
<li>It becomes <strong>difficult to review and manage</strong> which character leads on which map.</li>
</ul>
<p>To solve these issues, I created the plugin "<strong>UM_MapActorSetting.js</strong>" -- <strong>simply add a tag to a map's note field, and the player character automatically switches to the specified actor when entering that map</strong>.</p>
<p>This article covers the plugin's features, installation method, configuration steps, and specific usage examples. It's recommended for anyone who wants a simpler way to implement character switching without event commands.</p>
<p><img src="/2709b732d5d65a079ea64dff8cd593c1/20241028-%E3%83%9E%E3%83%83%E3%83%97%E5%8D%98%E4%BD%8D%E3%81%A7%E6%93%8D%E4%BD%9C%E3%82%AD%E3%83%A3%E3%83%A9%E3%82%92%E5%A4%89%E6%9B%B4%E3%81%99%E3%82%8B.webp" alt="Player character automatically switches when moving between maps"></p>
<p>:::ad</p>
<hr>
<p><strong>In this article</strong></p>
<ol>
<li>Plugin Overview and Benefits</li>
<li>Plugin Installation</li>
<li>Basic Setup (Parameters and Map Notes)</li>
<li>Usage Example (Switching Between Hero and Princess)</li>
<li>Usage Notes</li>
<li>Plugin Code (UM_MapActorSetting.js)</li>
</ol>
<hr>
<h2>Plugin Overview and Benefits</h2>
<p>As mentioned above, the standard way to switch player characters per map in RPG Maker MV/MZ is through the "Change Party Leader" event command. However, this method becomes cumbersome and hard to manage as the number of maps increases.</p>
<p>The <strong>UM_MapActorSetting.js</strong> plugin was developed to solve this problem.</p>
<p><strong>Key features and benefits:</strong></p>
<ul>
<li><strong>Specify in the map's note field:</strong> Which character to control on which map is specified with a simple tag (e.g., <code>&#x3C;Player:hero></code>) in each map's "Note" field.</li>
<li><strong>Automatic switching:</strong> When the player enters a map with a note tag set, the plugin automatically swaps the party leader (player character) to the specified actor.</li>
<li><strong>No events needed:</strong> You no longer need to create and place switching events on each map.</li>
<li><strong>Simple configuration:</strong> Just link an "identifier" to an "Actor ID" in the plugin parameters and write the identifier in the map notes. It's easy to set up, review, and modify later.</li>
</ul>
<p>This makes developing games with multiple switchable characters much smoother.</p>
<p>:::ad</p>
<h2>Plugin Installation</h2>
<p>Installing the plugin is straightforward.</p>
<ol>
<li>Copy the plugin code at the bottom of this article, paste it into a text editor (such as Notepad), and save the file as <code>UM_MapActorSetting.js</code>.</li>
<li>Place the created <code>UM_MapActorSetting.js</code> file in the <code>js/plugins</code> folder within your RPG Maker project directory.</li>
<li>Open the RPG Maker Editor and go to "Tools" menu > "Plugin Manager."</li>
<li>Double-click an empty row in the plugin list, select <code>UM_MapActorSetting</code> from the "Name" dropdown, set "Status" to "ON," and click "OK."</li>
</ol>
<p><img src="/7f951310894d95dfc0405fc1e21bb0e1/20241028-01-%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E5%B0%8E%E5%85%A5.png" alt="Enabling UM_MapActorSetting in the RPG Maker MV/MZ Plugin Manager"></p>
<p>The plugin is now active.</p>
<h2>Basic Setup (Parameters and Map Notes)</h2>
<p>Next, configure the plugin to work correctly.</p>
<h3>Plugin Parameter Settings</h3>
<p>First, link the "identifiers (arbitrary strings)" used in map note fields with the actual "Actors (characters)" to switch to.</p>
<ol>
<li>Double-click (or select and press Enter) <code>UM_MapActorSetting</code> in the Plugin Manager to open the "Parameters" section on the right.</li>
<li>Double-click the "Actor Settings" (the <code>MapActorSettings</code> list) item.</li>
<li>Double-click an empty row (or use the "+" button) to add a new entry.</li>
<li>In the new entry, configure these two items:
<ul>
<li><strong>Character ID:</strong> Enter an arbitrary identifier to use in map note tags (alphanumeric characters recommended, e.g., <code>hero</code>, <code>princess</code>, <code>actor1</code>, etc.).</li>
<li><strong>Actor:</strong> Select the actor you want to associate with this identifier from the database using the dropdown menu on the right.</li>
</ul>
</li>
<li>Add entries for as many characters as you want to switch between.</li>
<li>Click "OK" when done.</li>
</ol>
<p><img src="/b93ad0b91e252cbd4bd915a57e296858/20241028-02-%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E5%B0%8E%E5%85%A5_1.png" alt="Opening the ActorSettings plugin parameter screen"></p>
<p><img src="/c6ba3b7766bc9cbebd5fcac101362d5b/20241028-03-%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E5%B0%8E%E5%85%A5_1.png" alt="Configuring Character ID and Actor"></p>
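<p>Under the hood, RPG Maker stores a struct-array parameter like this as JSON strings nested inside a JSON array, and the plugin at the end of this article decodes it with two <code>JSON.parse</code> passes. A standalone sketch with illustrative values (not real project data):</p>

```javascript
// RPG Maker serializes a struct[] plugin parameter as an array of
// JSON strings, so decoding takes two JSON.parse passes: one for
// the outer array and one per entry. (Illustrative values only.)
var raw =
  '["{\\"Character ID\\":\\"hero\\",\\"Actor\\":\\"1\\"}",' +
  '"{\\"Character ID\\":\\"princess\\",\\"Actor\\":\\"2\\"}"]';

var settings = JSON.parse(raw).map(function (entry) {
  return JSON.parse(entry);
});

// Look up the Actor ID for a given identifier, as the plugin does:
var config = settings.find(function (c) {
  return c["Character ID"] === "princess";
});
var actorId = Number(config.Actor); // 2
```

<p>This is also why the "Character ID" you enter here must match the map note tag exactly: it is used verbatim as the lookup key.</p>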
<h3>Map Note Settings</h3>
<p>Next, configure the maps where you want the player character to automatically switch.</p>
<ol>
<li>In the RPG Maker Editor, open the map where you want automatic character switching.</li>
<li>Right-click the map name (or select the map in edit mode) and open "Map Settings" (or "Map Properties").</li>
<li>In the "<strong>Note</strong>" field at the bottom right, enter a tag in the following format:</li>
</ol>
<pre><code>&#x3C;Player:identifier>
</code></pre>
<p>Replace <code>identifier</code> with the exact "<strong>Character ID</strong>" you set in the plugin parameters. (e.g., <code>&#x3C;Player:hero></code>)</p>
<ol start="4">
<li>Click "OK" when done.</li>
</ol>
<p><img src="/bc19bad503e966263002a688567dde38/20241028-04-%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E5%B0%8E%E5%85%A5_1.png" alt="Writing a tag in the map settings&#x27; Note field"></p>
<p>Now, when the player moves to this map, the actor corresponding to the identifier specified in the <code>&#x3C;Player:identifier></code> tag will automatically become the party leader (player character).</p>
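<p>For reference, when the database loads, the engine parses every tag in the Note field into a <code>meta</code> object -- this is what the plugin later reads as <code>$dataMap.meta.Player</code>. A simplified standalone sketch of that parsing (not the engine's exact code), assuming the usual <code>&#x3C;name:value></code> and <code>&#x3C;name></code> tag formats:</p>

```javascript
// Simplified sketch of how RPG Maker turns Note-field tags into a
// meta object: <name:value> becomes meta[name] = "value", and a
// bare <name> becomes meta[name] = true.
function extractMeta(note) {
  var re = /<([^<>:]+)(:?)([^>]*)>/g;
  var meta = {};
  var match;
  while ((match = re.exec(note)) !== null) {
    meta[match[1]] = match[2] === ":" ? match[3] : true;
  }
  return meta;
}

var meta = extractMeta("<Player:hero>");
meta.Player; // "hero"
```

<p>Because the tag name and value are taken verbatim, the identifier comparison ends up case-sensitive (see "Usage Notes" below).</p>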
<p>:::ad</p>
<h2>Usage Example (Switching Between Hero and Princess)</h2>
<p>As an example, let's say "Actor ID: 1" is "Hero" and "Actor ID: 2" is "Princess" in the database.</p>
<h3>1. Plugin Parameter Settings:</h3>
<p>Configure the following in the Plugin Manager:</p>
<p><img src="/db09b103f06d9e74f3c4ed8966bd44e1/20241028-05-%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E5%B0%8E%E5%85%A5_1.png" alt="Parameter settings example for Hero and Princess"></p>
<ul>
<li><strong>Entry 1:</strong>
<ul>
<li>Character ID: <code>hero</code></li>
<li>Actor: <code>0001: Hero</code> (Actor ID 1 in the database)</li>
</ul>
</li>
<li><strong>Entry 2:</strong>
<ul>
<li>Character ID: <code>princess</code></li>
<li>Actor: <code>0002: Princess</code> (Actor ID 2 in the database)</li>
</ul>
</li>
</ul>
<p><img src="/65550e395a3c4efc7a3cc8c3e5dc8061/20241028-06-%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E5%B0%8E%E5%85%A5_1.png" alt="Plugin parameter list after configuration"></p>
<h3>2. Map Note Settings:</h3>
<p>For example, set up notes for two maps as follows:</p>
<ul>
<li><strong>Map 1 (Hero's Town) Note field:</strong></li>
</ul>
<pre><code>&#x3C;Player:hero>
</code></pre>
<ul>
<li><strong>Map 2 (Castle) Note field:</strong></li>
</ul>
<pre><code>&#x3C;Player:princess>
</code></pre>
<p><strong>Result:</strong></p>
<ul>
<li>When the player enters the "Hero's Town" map, the player character automatically switches to "Hero."</li>
<li>When the player enters the "Castle" map, the player character automatically switches to "Princess."</li>
</ul>
<p><img src="/2709b732d5d65a079ea64dff8cd593c1/20241028-%E3%83%9E%E3%83%83%E3%83%97%E5%8D%98%E4%BD%8D%E3%81%A7%E6%93%8D%E4%BD%9C%E3%82%AD%E3%83%A3%E3%83%A9%E3%82%92%E5%A4%89%E6%9B%B4%E3%81%99%E3%82%8B.webp" alt="Demo of character switching when moving between maps"></p>
<p>This way, you can achieve per-map player character switching without using any event commands.</p>
<h2>Usage Notes</h2>
<ul>
<li>Character switching occurs automatically during map transitions (after a Transfer Player command, etc.) and when loading a save file.</li>
<li>The identifier in the map note tag <code>&#x3C;Player:identifier></code> must <strong>exactly match</strong> the "Character ID" set in the plugin parameters. It is case-sensitive. If the identifier is incorrect, the switch won't work.</li>
<li>The plugin performs a simple operation: it swaps the specified actor to the <strong>front</strong> of the party (moving them to the front if already in the party, or adding them and placing them at the front if not). It does not control the entire party composition or order. If you specify an actor not currently in the party, that actor will be added and become the leader. (The previous leader will be removed from the party.)</li>
<li>If you add or modify actors in the database, or change the "Character ID" in the plugin parameters, be sure to update the plugin parameter settings accordingly.</li>
<li>There is a possibility of conflicts with other plugins that modify party members. Please be cautious when using them together.</li>
</ul>
<p>:::ad</p>
<h2>Plugin Code (UM_MapActorSetting.js)</h2>
<p>Feel free to use this plugin code. Modifications are also welcome.</p>
<pre><code class="language-javascript">//=============================================================================
// UM_MapActorSetting
//=============================================================================

/*:
 * @plugindesc Automatically switches the player character based on map note tags.
 * @author UHIMA
 *
 * @param MapActorSettings
 * @text Actor Settings
 * @type struct&#x3C;ActorConfig>[]
 * @desc Settings for each actor
 *
 * @help
 * ============================================================================
 * Overview
 * ============================================================================
 * This plugin enables automatic player character switching based on
 * map note tags.
 *
 * ============================================================================
 * Usage
 * ============================================================================
 * Add the following note tag to maps where you want to switch the player character:
 *
 * &#x3C;Player:characterId>
 *
 * You can link database Actor IDs with character names in the plugin parameters.
 * This allows simple notation like &#x3C;Player:harold> to switch actors.
 * When adding new characters or changing Actor IDs, update this plugin's
 * mappings accordingly.
 *
 * Example:
 * &#x3C;Player:harold>
 */

/*~struct~ActorConfig:
 * @param Character ID
 * @text Character ID
 * @type string
 * @desc Identifier used in map note tags (e.g., harold)
 *
 * @param Actor
 * @text Actor
 * @type actor
 * @desc The actor to switch to
 */

(function () {
  var parameters = PluginManager.parameters("UM_MapActorSetting");
  var MapActorSettings = JSON.parse(parameters["MapActorSettings"] || "[]").map(
    (setting) => JSON.parse(setting)
  );

  var _Game_Player_performTransfer = Game_Player.prototype.performTransfer;
  Game_Player.prototype.performTransfer = function () {
    _Game_Player_performTransfer.call(this);
    switchActorByMap();
  };

  function switchActorByMap() {
    if ($dataMap &#x26;&#x26; $dataMap.meta.Player) {
      var characterId = $dataMap.meta.Player;
      var actorConfig = MapActorSettings.find(
        (config) => config["Character ID"] === characterId
      );

      if (actorConfig) {
        var newActorId = Number(actorConfig.Actor);
        var currentLeader = $gameParty.leader();
        var currentLeaderId = currentLeader ? currentLeader.actorId() : null;

        if (currentLeaderId !== newActorId) {
          if (currentLeaderId) {
            $gameParty.removeActor(currentLeaderId);
          }
          $gameParty.addActor(newActorId);
          // addActor appends to the end of the party, so move the
          // new actor to the front to make them the leader even
          // when other members remain in the party.
          var members = $gameParty._actors;
          var index = members.indexOf(newActorId);
          if (index > 0) {
            members.splice(index, 1);
            members.unshift(newActorId);
          }
          $gamePlayer.refresh();
        }
      }
    }
  }

  var _Scene_Map_start = Scene_Map.prototype.start;
  Scene_Map.prototype.start = function () {
    _Scene_Map_start.call(this);
    switchActorByMap();
  };
})();
</code></pre>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/2709b732d5d65a079ea64dff8cd593c1/20241028-%E3%83%9E%E3%83%83%E3%83%97%E5%8D%98%E4%BD%8D%E3%81%A7%E6%93%8D%E4%BD%9C%E3%82%AD%E3%83%A3%E3%83%A9%E3%82%92%E5%A4%89%E6%9B%B4%E3%81%99%E3%82%8B.webp" medium="image"/></item><item><title><![CDATA[How to Create Switch-Activated Appearing Floor Tiles in RPG Maker MV/MZ]]></title><description><![CDATA[A detailed guide on how to create a floor-appearing gimmick in RPG Maker that becomes walkable when a specific switch is turned ON]]></description><link>https://uhiyama-lab.com/en/blog/gamedev/rpgmaker-spawn-maptile/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/gamedev/rpgmaker-spawn-maptile/</guid><category><![CDATA[rpgmaker]]></category><pubDate>Wed, 23 Oct 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="/7e5c0c95e30d10fefe73b0bee9422a3f/20241023-1-RPG%E3%83%84%E3%82%AF%E3%83%BC%E3%83%AB%E9%80%9A%E8%A1%8C%E5%BA%8A%E8%A1%A8%E7%A4%BA.webp" alt="Floor tiles appearing when a switch is turned ON in RPG Maker"></p>
<p>When creating games in <strong>RPG Maker MV</strong> or <strong>RPG Maker MZ</strong>, you'll often want to implement gimmicks where a path opens up after a certain condition is met. For example, after defeating a boss or solving a puzzle, <strong>floor tiles appear</strong> in a previously impassable area, allowing the player to proceed.</p>
<p>These "<strong>floors that appear when a switch is turned ON</strong>" are effective gimmicks that give players a sense of accomplishment and expand exploration possibilities.</p>
<p>This article explains how to use the <strong>Event</strong> system in RPG Maker MV/MZ to make floor tiles (platforms) appear and become passable when a specific switch is turned ON.</p>
<hr>
<p>:::ad</p>
<p><strong>In this article</strong></p>
<ol>
<li>Why Use "Events" to Create Floors? (Overview)</li>
<li>Editing the "Tileset" to Use Floor Tile Images in Events</li>
<li>Setting Up Floor Events That Appear and Become Passable via Switch</li>
<li>Summary: Create Dynamic Maps with Events</li>
</ol>
<hr>
<h2>Why Use "Events" to Create Floors? (Overview)</h2>
<p>In RPG Maker, basic terrain like ground and walls is created by placing "<strong>tiles</strong>" on the map. However, <strong>tiles placed on the map are fundamentally static</strong> -- you cannot toggle their visibility or remove them based on in-game switch or variable states.</p>
<p>Therefore, dynamic gimmicks like "floors that appear under certain conditions (switch ON)" need to be implemented using the "<strong>Event</strong>" system. With events, you can set appearance conditions, specify display images, and control passability settings.</p>
<p>In this article, we'll set up an example where, when the in-game <strong>switch "#0001_PassableFloor"</strong> is turned ON, floor tiles appear at designated locations and become passable.</p>
<p>:::ad</p>
<h2>Editing the "Tileset" to Use Floor Tile Images in Events</h2>
<p>First, you need to prepare so that the floor graphic you want to show can be set as an event's image.</p>
<p>Try creating a new event on the map and opening the image settings. You'll likely notice that while you can select images from [Tileset B] (walls, objects, etc.) and [Tileset C] (building roofs, etc.), <strong>the ground and floor tile images are not listed</strong>.</p>
<p><img src="/b596604e63ddeffca91d13df26168b85/20241023-2-%E3%82%BF%E3%82%A4%E3%83%AB%E3%82%BB%E3%83%83%E3%83%88%E5%88%9D%E6%9C%9F%E8%A8%AD%E5%AE%9A.png" alt="Default event image settings (no floor tiles available)"></p>
<p>This is because the tilesets available as event images are limited by default. To fix this, open the "<strong>Database</strong>" (the gear icon in the toolbar) and modify the settings in the "<strong>Tilesets</strong>" tab.</p>
<ol>
<li>
<p>Open the Database and select the "Tilesets" tab.</p>
</li>
<li>
<p>Select the tileset currently used by your map (e.g., 001: Field).</p>
</li>
<li>
<p>In the settings on the right, <strong>set the same image file that's assigned to the [A] tab's [A2] (ground tiles) to the [D] tab (or another available tab, e.g., [E])</strong>.</p>
</li>
</ol>
<p><img src="/e1e330aa6f231d72b7ce68679b8cfcbc/20241023-3-%E3%82%BF%E3%82%A4%E3%83%AB%E3%82%BB%E3%83%83%E3%83%88%E6%9B%B4%E6%96%B0.png" alt="Setting the same image from A2 to tab D in the tileset"></p>
<ol start="4">
<li>That's it. Click "Apply" or "OK" to close the Database.</li>
</ol>
<p>This adds [Tileset D] (or whichever tab you configured) to the event's image selection list, allowing you to choose floor tiles from there.</p>
<p><strong>[Important] Check the Passability Settings!</strong></p>
<p>In the tileset editing screen, select the floor tile image you just added and make sure to check the "<strong>Passability</strong>" setting on the left side.</p>
<ul>
<li><strong>O</strong>: Passable</li>
<li><strong>X</strong>: Impassable</li>
<li><strong>Star</strong>: Passable (displayed above the player)</li>
</ul>
<p>If the floor tile you want to appear is set to "<strong>X</strong>" (impassable), the player won't be able to walk on it even when the event displays the floor. Make sure it's set to "<strong>O</strong>" (or Star, depending on the situation). If it's set to X, click to change it to O.</p>
<p>:::ad</p>
<h2>Setting Up Floor Events That Appear and Become Passable via Switch</h2>
<p>Once the tileset is prepared, create the events that will make the floor appear. The setup is very straightforward.</p>
<ol>
<li>
<p>Right-click on the map tile where you want the floor to appear and select "Create Event."</p>
</li>
<li>
<p>When the Event Editor opens, leave <strong>Page 1</strong> (the initial state) <strong>with no image set and essentially in its default state</strong>. However, check/set the following two points:</p>
</li>
</ol>
<ul>
<li><strong>Options:</strong> Make sure the "Through" checkbox is <strong>unchecked</strong>. (If checked, the player could pass through even when there's no floor)</li>
<li><strong>Priority:</strong> Select "<strong>Below characters</strong>." (This allows the player to walk over the floor when it appears)</li>
</ul>
<p>(The trigger can stay at the default "Action Button." The execution content can also remain empty.)</p>
<p><img src="/f9f9340287b3519ec516995094ad1e88/20241023-5-%E3%82%A4%E3%83%99%E3%83%B3%E3%83%88%E8%A8%AD%E5%AE%9A.png" alt="Floor appearance event settings (Page 1: mostly default)"></p>
<ol start="3">
<li>
<p>Next, click the "<strong>New Event Page</strong>" button at the top of the Event Editor to create <strong>Page 2</strong>.</p>
</li>
<li>
<p>On <strong>Page 2</strong>, configure the state when the floor appears:</p>
</li>
</ol>
<ul>
<li><strong>Conditions:</strong> In the "Conditions" section on the left, check "<strong>Switch</strong>" and specify that the target switch (e.g., <strong>#0001_PassableFloor</strong>) is "<strong>ON</strong>."</li>
<li><strong>Image:</strong> Double-click the "Image" area in the lower left to open the tileset selection screen. Select the floor tile you made available earlier (e.g., from [Tileset D]) and click "OK."</li>
<li><strong>Options:</strong> Same as Page 1, leave "Through" <strong>unchecked</strong>.</li>
<li><strong>Priority:</strong> Same as Page 1, select "<strong>Below characters</strong>."</li>
</ul>
<p>(The trigger and execution content can also remain at their defaults for this page.)</p>
<p><img src="/0bfca8b8b293e46b1f106fb7a4e4527e/20241023-6-%E3%82%A4%E3%83%99%E3%83%B3%E3%83%88%E8%A8%AD%E5%AE%9A-2.png" alt="Floor appearance event settings (Page 2: floor image displayed when switch is ON)"></p>
<ol start="5">
<li>
<p>The event setup is complete. Click "OK" to close the Event Editor.</p>
</li>
<li>
<p>Copy the "appearing floor" event you created (Ctrl+C or Cmd+C) and paste it (Ctrl+V or Cmd+V) onto other tiles where you want the floor to appear.</p>
</li>
</ol>
<p><img src="/791cd291d26108d233d22fa5c6681e99/20241023-4-%E3%82%A4%E3%83%99%E3%83%B3%E3%83%88%E9%85%8D%E7%BD%AE.png" alt="Copying and placing floor appearance events on the map"></p>
<p>Test play the game and trigger the event that turns ON the specified switch (e.g., #0001_PassableFloor). Floor tiles should appear at the locations where you placed the events, and you should be able to walk on them.</p>
<p><img src="/7e5c0c95e30d10fefe73b0bee9422a3f/20241023-1-RPG%E3%83%84%E3%82%AF%E3%83%BC%E3%83%AB%E9%80%9A%E8%A1%8C%E5%BA%8A%E8%A1%A8%E7%A4%BA.webp" alt="Game screen showing floors that appeared and became passable after the switch was turned ON"></p>
<p>That completes the implementation of passable floors that appear via switch operation.</p>
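<p>As a side note on why this works: RPG Maker always activates the highest-numbered event page whose conditions are met, checking pages from last to first. A simplified sketch of that selection rule (illustrative, not the engine's exact code):</p>

```javascript
// Simplified sketch of event page selection: scan pages from last
// to first and use the first one whose conditions are met. A page
// with no switch condition always qualifies.
function findActivePage(pages, switches) {
  for (var i = pages.length - 1; i >= 0; i--) {
    var c = pages[i].conditions;
    if (!c.switchValid || switches[c.switchId]) {
      return pages[i];
    }
  }
  return null;
}

var pages = [
  { name: "Page 1 (no floor)", conditions: { switchValid: false } },
  { name: "Page 2 (floor shown)", conditions: { switchValid: true, switchId: 1 } }
];

var offPage = findActivePage(pages, { 1: false }).name; // "Page 1 (no floor)"
var onPage = findActivePage(pages, { 1: true }).name;   // "Page 2 (floor shown)"
```

<p>So while the switch is OFF, only the blank Page 1 qualifies; the moment it turns ON, Page 2 wins and the floor image appears.</p>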
<hr>
<p>:::ad</p>
<h2>Summary: Create Dynamic Maps with Events</h2>
<p>This article explained the basic method for implementing "<strong>floors that appear when a switch is turned ON</strong>" using the <strong>Event</strong> system in <strong>RPG Maker MV/MZ</strong>.</p>
<p>Here are the key points:</p>
<ul>
<li>Since tiles themselves cannot be dynamically changed, create the floor as an "Event."</li>
<li>To use floor tile images as event graphics, you need to modify the "Tileset" settings in the "Database" beforehand.</li>
<li>When editing the tileset, make sure the floor tile's "Passability" is set to "O (passable)."</li>
<li>The event should have a 2-page structure: Page 1 is blank (switch OFF state), and Page 2 has the appearance condition (switch ON), floor image, and options (<strong>Through OFF</strong>, <strong>Priority: Below characters</strong>).</li>
</ul>
<p>Using this technique, you can create more dynamic games where the map changes based on player actions -- like new paths opening after a boss fight, or hidden passages appearing when puzzles are solved. Try incorporating it into your own projects!</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/7e5c0c95e30d10fefe73b0bee9422a3f/20241023-1-RPG%E3%83%84%E3%82%AF%E3%83%BC%E3%83%AB%E9%80%9A%E8%A1%8C%E5%BA%8A%E8%A1%A8%E7%A4%BA.webp" medium="image"/></item><item><title><![CDATA[How to Implement a Charge Attack in Unity Using the Input System]]></title><description><![CDATA[Learn how to implement a charge attack in Unity using the Input System. A simple approach using started and canceled events with practical code examples]]></description><link>https://uhiyama-lab.com/en/blog/gamedev/unity-input-system-charge-attack/</link><guid isPermaLink="false">https://uhiyama-lab.com/en/blog/gamedev/unity-input-system-charge-attack/</guid><category><![CDATA[unity]]></category><pubDate>Fri, 05 Apr 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>This article explains how to implement a <strong>charge attack</strong> action -- similar to the spin attack from The Legend of Zelda -- using Unity's <strong>Input System</strong>.</p>
<p>While the Input System offers a "Hold Interaction" for detecting long presses, this guide takes a different approach. Instead, we'll use the basic button events <strong>"the moment a button is pressed (<code>started</code>)"</strong> and <strong>"the moment a button is released (<code>canceled</code>)"</strong> to measure the time difference between them.</p>
<p>Compared to the traditional approach of monitoring input every frame in the <code>Update</code> function, the Input System's event-driven approach results in cleaner code and better maintainability, especially when adding or modifying features.</p>
<hr>
<p><strong>In this article</strong></p>
<ol>
<li>Implementing Charge Attacks: Input System (Press) vs Update vs Hold Interaction</li>
<li>Installing the Input System and Creating an Input Actions Asset</li>
<li>Input Actions: Defining the Left Click (Attack Button) Action</li>
<li>Script Integration: PlayerInput and Event Handling Script</li>
<li>Charge Time Calculation Logic: Using started and canceled</li>
<li>Adding Mid-Charge Effects: Compensating for Input System Limitations with Coroutines</li>
<li>Summary: Pros, Cons, and When to Use the Press Event Approach</li>
</ol>
<hr>
<p>:::ad</p>
<h2>Implementing Charge Attacks: Input System (Press) vs Update vs Hold Interaction</h2>
<p>There are several ways to implement a charge attack. Let's look at the characteristics of each approach.</p>
<ul>
<li><strong>Traditional <code>Update</code> function approach:</strong>
<ul>
<li>Check input every frame in <code>Update()</code> and measure how long the button has been held.</li>
<li>The concept is intuitive, but the code tends to become complex as the number of input types and conditional branches increases.</li>
</ul>
</li>
<li><strong>Using Input System's <code>Hold Interaction</code>:</strong>
<ul>
<li>Set the Interaction to "Hold" in the Input Actions asset. After the button is held for a specified time (Hold Time), the <code>performed</code> event fires.</li>
<li>Easy to set up, but as we'll discuss later, implementing both "short press (normal attack)" and "long press (charge attack)" on the same button can become somewhat complex.</li>
</ul>
</li>
<li><strong>Using Input System <code>Press</code> events (<code>started</code>/<code>canceled</code>) (this article's approach):</strong>
<ul>
<li>Use the moment the button is pressed (<code>started</code>) and the moment it's released (<code>canceled</code>) to measure the time between them.</li>
<li>Event-driven code stays organized, and you can flexibly implement short press vs. long press detection and mid-charge processing.</li>
</ul>
</li>
</ul>
<h3>Q. Why use Press events (started/canceled) instead of Hold Interaction?</h3>
<p><code>Hold Interaction</code> is convenient for detecting that a button has been held for a specified duration. However, it has some drawbacks when you want to "use the same button for both short press (normal attack) and long press (charge attack)" or when you want to "add visual effects while charging."</p>
<ul>
<li><strong>Combining short and long presses:</strong> With Hold Interaction, if you want to execute a short press action (like a normal attack) immediately when the button is pressed -- before the Hold is completed and <code>performed</code> is called -- extra work is needed. For example, you might need to trigger a normal attack on <code>started</code> and cancel it if Hold completes, or set up a separate action for short presses, which complicates the implementation.</li>
<li><strong>Using the press start timing:</strong> If you want the character to immediately enter a ready stance or start a charge effect the moment the button is pressed, it can be difficult to control the timing with Hold Interaction alone.</li>
<li><strong>Mid-charge control:</strong> When displaying a charge gauge or implementing level-up effects at certain intervals during charging, Hold Interaction makes fine-grained control difficult since you have to wait for the final <code>performed</code> event.</li>
</ul>
<p>On the other hand, the <strong><code>Press</code> event (<code>started</code>/<code>canceled</code>) approach</strong> introduced in this article offers:</p>
<ul>
<li>Precise measurement of hold duration by calculating the time difference between the <code>started</code> event (button pressed) and the <code>canceled</code> event (button released).</li>
<li>Easy determination of "normal attack if held less than the threshold, charge attack if held longer" within the <code>canceled</code> event handler.</li>
<li>A natural flow: start the charge animation and effects on <code>started</code>, then trigger the attack on <code>canceled</code>.</li>
<li>Flexible mid-charge effects and state changes when combined with coroutines (discussed later).</li>
</ul>
<p>In summary, <strong>when you need flexible short press / long press detection and mid-charge effects and state management</strong>, the <code>started</code>/<code>canceled</code> event approach is well-suited.</p>
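<p>For reference, here is a minimal sketch of what the <strong>Hold Interaction</strong> variant might look like, assuming a Hold interaction (1-second duration) has been added to the <code>AttackLeft</code> action in the Input Actions editor. The exact phase behavior on release after a completed hold can vary between Input System versions, so treat this as an illustration of the structure rather than a drop-in implementation:</p>
<pre><code class="language-csharp">using UnityEngine;
using UnityEngine.InputSystem;

public class HoldInteractionExample : MonoBehaviour
{
    // Called by a PlayerInput component ("Invoke Unity Events" behavior)
    public void OnAttackLeft(InputAction.CallbackContext context)
    {
        if (context.performed)
        {
            // With a Hold interaction, performed fires only AFTER the hold
            // duration elapses, so the charge attack itself is easy
            Debug.Log("Charge attack (hold completed)");
        }
        else if (context.canceled)
        {
            // canceled fires when the button is released before the hold
            // completes; using this as "normal attack" works, but combining
            // it with press-start effects needs the extra bookkeeping
            // described above
            Debug.Log("Released early: normal attack");
        }
    }
}
</code></pre>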
<hr>
<p>:::ad</p>
<h2>Installing the Input System and Creating an Input Actions Asset</h2>
<p>First, install the Input System package in your project and create an Input Actions asset to define your inputs.</p>
<h3>Installing the Input System Package</h3>
<p>This is done through the Unity Editor menu.</p>
<p><img src="/01f9ea2434798b9f69941dfc524c1870/20240405-001-InputSystem%E3%81%AE%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AB%E6%96%B9%E6%B3%95.png" alt="Installing Input System from the Package Manager"></p>
<ol>
<li>Open <strong>Window > Package Manager</strong>.</li>
<li>Select "<strong>Packages: Unity Registry</strong>" from the dropdown in the upper left.</li>
<li>Find "<strong>Input System</strong>" in the list and click the "Install" button.</li>
<li>If a dialog prompts you to change project settings during installation, select "Yes" to restart the editor.</li>
</ol>
<h3>Creating an Input Actions Asset File</h3>
<p>Next, create an asset file to define input actions.</p>
<p><img src="/de1f989dd6e51c777f5fd5040a1e1dce/20240405-002-InputActions%E3%81%AE%E9%81%B8%E6%8A%9E.png" alt="Creating an Input Actions asset in the Project window"></p>
<ol>
<li>Right-click in the Project window and select <strong>Create > Input Actions</strong>.</li>
<li>Give the created asset file a descriptive name (e.g., <code>PlayerInputActions.inputactions</code>).</li>
<li>Select the created asset file, check "<strong>Generate C# Class</strong>" in the Inspector window, and click the "Apply" button.</li>
</ol>
<p><img src="/ef48f4829b7a3d51e834c1b07195bce1/20240405-003-InputActions%E3%81%A7%E4%BD%9C%E6%88%90%E3%81%95%E3%82%8C%E3%81%9F%E3%83%95%E3%82%A1%E3%82%A4%E3%83%AB.png" alt="Checking Generate C# Class generates a C# script"></p>
<p>Checking "Generate C# Class" auto-generates a C# class corresponding to this Input Actions asset, making it easier to handle input events from scripts.</p>
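<p>As an alternative to the PlayerInput component used later in this article, the generated class can also be consumed directly from code. A minimal sketch, assuming the asset was named <code>PlayerInputActions</code> and contains a <code>Gameplay</code> map with an <code>AttackLeft</code> action (both are set up in the following sections):</p>
<pre><code class="language-csharp">using UnityEngine;
using UnityEngine.InputSystem;

public class GeneratedClassExample : MonoBehaviour
{
    // Instance of the class produced by "Generate C# Class"
    private PlayerInputActions actions;

    private void Awake()
    {
        actions = new PlayerInputActions();
        // Subscribe to the press/release events in code
        actions.Gameplay.AttackLeft.started  += ctx => Debug.Log("pressed");
        actions.Gameplay.AttackLeft.canceled += ctx => Debug.Log("released");
    }

    // Actions must be enabled before they deliver events
    private void OnEnable()  => actions.Gameplay.Enable();
    private void OnDisable() => actions.Gameplay.Disable();
}
</code></pre>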
<hr>
<h2>Input Actions: Defining the Left Click (Attack Button) Action</h2>
<p>Double-click the Input Actions asset file (e.g., <code>PlayerInputActions.inputactions</code>) to open the editor window, and define the action used for the charge attack.</p>
<p><img src="/5b55d7d9bcc6411a98f16180c6e40494/20240405-004-InputAction%E3%81%AE%E7%99%BB%E9%8C%B2.png" alt="Setting up the attack action in the Input Actions editor"></p>
<ol>
<li>Click the "+" button in the <strong>Action Maps</strong> column to create a new Action Map (e.g., <code>Gameplay</code>). An Action Map is a group that organizes related actions.</li>
<li>Click the "+" button in the <strong>Actions</strong> column to create a new Action (e.g., <code>AttackLeft</code>). This corresponds to a specific input operation.</li>
<li>Select the <code>AttackLeft</code> Action and configure the following in the Properties panel on the right:
<ul>
<li><strong>Action Type:</strong> Select "<strong>Button</strong>." This is suitable for simple press/release inputs.</li>
</ul>
</li>
<li>Select <code>&#x3C;No Binding></code> under the <code>AttackLeft</code> Action and configure the following in the Properties panel. This binds the action to a specific device button:
<ul>
<li><strong>Path:</strong> Select "Mouse" > "<strong>Left Button</strong>" from the dropdown menu. (Gamepad buttons can also be configured here)</li>
</ul>
</li>
<li>When you're done editing, click the "<strong>Save Asset</strong>" button at the top of the window to save your changes.</li>
</ol>
<p>Now, "left mouse click" is registered as an action named <code>AttackLeft</code> that can be received as events from scripts.</p>
<hr>
<h2>Script Integration: PlayerInput and Event Handling Script</h2>
<p>To make the defined Input Actions work in your game, set up the script integration.</p>
<ol>
<li>
<p>Create an empty GameObject in the scene and give it a descriptive name (e.g., <code>InputManager</code>).</p>
</li>
<li>
<p>Add a "<strong>Player Input</strong>" component to the <code>InputManager</code> GameObject.</p>
</li>
<li>
<p>Drag and drop the Input Actions asset you created earlier (e.g., <code>PlayerInputActions.inputactions</code>) into the "<strong>Actions</strong>" field of the Player Input component.</p>
</li>
<li>
<p>Create the following C# script (e.g., <code>InputManager.cs</code>) and attach it to the <code>InputManager</code> GameObject.</p>
</li>
</ol>
<pre><code class="language-csharp">using UnityEngine;
using UnityEngine.InputSystem;

// Indicates that the PlayerInput component is required
[RequireComponent(typeof(PlayerInput))]
public class InputManager : MonoBehaviour
{
    // Variable to record the time when the button was first pressed
    private float buttonPressStartTime;
    // Threshold time for determining a charge attack (e.g., 1 second)
    private const float specialAttackThreshold = 1.0f;

    // Method called by the PlayerInput component.
    // With the "Invoke Unity Events" behavior used below, you wire this up
    // manually in the Inspector; with the "Send Messages" behavior, the name
    // must instead be "On" + the action name defined in Input Actions
    public void OnAttackLeft(InputAction.CallbackContext context)
    {
        // Processing when the button is pressed (started)
        if (context.started)
        {
            Debug.Log("Attack button pressed (started)");
            // Record the press start time
            buttonPressStartTime = Time.time;
            // You can also add ready stance or charge effect start processing here
        }
        // Processing when the button is released (canceled)
        else if (context.canceled)
        {
            Debug.Log("Attack button released (canceled)");
            // Calculate how long the button was held
            float pressDuration = Time.time - buttonPressStartTime;
            Debug.Log($"Press duration: {pressDuration} seconds");

            // If the hold time exceeds the threshold, perform a charge attack
            if (pressDuration > specialAttackThreshold)
            {
                PerformSpecialAttack(); // Call charge attack method
            }
            // Otherwise, perform a normal attack
            else
            {
                PerformNormalAttack(); // Call normal attack method
            }
        }
        // context.performed is often called at nearly the same time as started for Button type
        // When not using Hold Interaction, primarily use started and canceled
    }

    // Perform a normal attack (placeholder implementation)
    private void PerformNormalAttack()
    {
        Debug.Log("Perform Normal Attack!");
        // Add actual normal attack logic here
    }

    // Perform a charge attack (placeholder implementation)
    private void PerformSpecialAttack()
    {
        Debug.Log("Perform Special Attack!");
        // Add actual charge attack logic here
    }
}
</code></pre>
<ol start="5">
<li>
<p>Select the <code>InputManager</code> GameObject and set the Player Input component's "<strong>Behavior</strong>" to "<strong>Invoke Unity Events</strong>" in the Inspector window.</p>
</li>
<li>
<p>Expand the "Events" section, open the Action Map name you configured (e.g., <code>Gameplay</code>), and click the "+" button next to the action name (e.g., <code>Attack Left</code>).</p>
</li>
<li>
<p>Drag and drop the <code>InputManager</code> GameObject itself into the event field, then select "InputManager" > "<strong>OnAttackLeft (InputAction.CallbackContext)</strong>" from the dropdown menu on the right. (With the "Send Messages" behavior, a method named <code>On[ActionName]</code> would instead be called automatically, without this wiring)</p>
</li>
</ol>
<p>Now, every time the left mouse button is clicked (pressed or released), the <code>OnAttackLeft</code> method in the <code>InputManager.cs</code> script will be called. This event-driven mechanism allows you to handle input without using the <code>Update</code> function.</p>
<hr>
<p>:::ad</p>
<h2>Charge Time Calculation Logic: Using started and canceled</h2>
<p>Let's take a closer look at the charge time calculation logic in the <code>OnAttackLeft</code> method of the <code>InputManager.cs</code> script discussed above.</p>
<ol>
<li><strong>Press start (<code>context.started</code>):</strong>
<ul>
<li>This block executes the moment the left mouse button is pressed.</li>
<li>The current time (<code>Time.time</code>) is recorded in the <code>buttonPressStartTime</code> variable. This becomes the starting point for measuring charge time.</li>
</ul>
</li>
<li><strong>Press end (<code>context.canceled</code>):</strong>
<ul>
<li>This block executes the moment the held left button is released.</li>
<li>The press duration (<code>pressDuration</code>) is calculated by subtracting the recorded press start time (<code>buttonPressStartTime</code>) from the current time (<code>Time.time</code>).</li>
<li>The calculated <code>pressDuration</code> is compared to the predefined charge attack threshold (<code>specialAttackThreshold</code>).</li>
<li>If <code>pressDuration</code> exceeds the threshold, <code>PerformSpecialAttack()</code> is called; otherwise, <code>PerformNormalAttack()</code> is called.</li>
</ul>
</li>
</ol>
<p>By using the Input System's <code>started</code> and <code>canceled</code> events this way, you can precisely measure how long a button was held and branch between normal and charge attacks accordingly. Since there's no need to add time every frame inside <code>Update()</code>, the code becomes simpler and may reduce processing overhead.</p>
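<p>As a side note, the <code>CallbackContext</code> passed to the handler carries its own timestamps (<code>context.startTime</code> and <code>context.time</code>, with <code>context.duration</code> as their difference), so the hold time can also be read without storing <code>Time.time</code> yourself. A sketch of the <code>canceled</code> branch using this approach (the result should match the <code>Time.time</code> version, but verify against your Input System version):</p>
<pre><code class="language-csharp">else if (context.canceled)
{
    // startTime: when the interaction began (the press)
    // time:      when this canceled event fired (the release)
    double pressDuration = context.time - context.startTime;
    // Equivalently: double pressDuration = context.duration;

    if (pressDuration > specialAttackThreshold)
        PerformSpecialAttack();
    else
        PerformNormalAttack();
}
</code></pre>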
<hr>
<h2>Adding Mid-Charge Effects: Compensating for Input System Limitations with Coroutines</h2>
<p>The Input System's event-driven model excels at capturing instantaneous events like "pressed" and "released," but it requires some extra work when you want to trigger processing at a <strong>specific timing while the button is being held</strong> (e.g., the moment a charge attack becomes available).</p>
<p>For instance, if you want effects like "make the character glow when the charge time reaches the threshold" or "play a charge complete sound effect," the <code>started</code> and <code>canceled</code> events alone cannot directly detect that "midpoint."</p>
<p>One common solution is to use Unity's <strong>Coroutines</strong>. Start a coroutine when the button is pressed (<code>started</code>), and after a set time (the charge attack threshold) has elapsed, send a charge complete signal.</p>
<p>Here's an example of the <code>InputManager.cs</code> with a coroutine added to implement a charge complete signal (using Debug log output here).</p>
<pre><code class="language-csharp">using UnityEngine;
using UnityEngine.InputSystem;
using System.Collections; // Required for coroutines

[RequireComponent(typeof(PlayerInput))]
public class InputManager : MonoBehaviour
{
    private float buttonPressStartTime;
    private const float specialAttackThreshold = 1.0f;
    // Variable to hold the running coroutine
    private Coroutine chargeCheckCoroutine;
    // Flag indicating whether the charge complete signal has been sent
    private bool isChargeComplete = false;

    public void OnAttackLeft(InputAction.CallbackContext context)
    {
        if (context.started)
        {
            Debug.Log("Attack button pressed (started)");
            buttonPressStartTime = Time.time;
            isChargeComplete = false; // Reset flag when charging starts

            // Stop any existing coroutine (handles rapid button presses)
            if (chargeCheckCoroutine != null)
            {
                StopCoroutine(chargeCheckCoroutine);
            }
            // Start the charge timer coroutine
            chargeCheckCoroutine = StartCoroutine(ChargeTimerCoroutine());
        }
        else if (context.canceled)
        {
            Debug.Log("Attack button released (canceled)");
            // Stop the charge timer coroutine when the button is released
            if (chargeCheckCoroutine != null)
            {
                StopCoroutine(chargeCheckCoroutine);
                chargeCheckCoroutine = null; // Clear the coroutine reference
            }

            float pressDuration = Time.time - buttonPressStartTime;
            Debug.Log($"Press duration: {pressDuration} seconds");

            // If the charge complete flag is set (= threshold exceeded), perform charge attack
            if (isChargeComplete) // You could also use pressDuration > specialAttackThreshold
            {
                PerformSpecialAttack();
            }
            else
            {
                PerformNormalAttack();
            }

            // Reset the flag after executing the attack
            isChargeComplete = false;
        }
    }

    // Coroutine that monitors charge time
    private IEnumerator ChargeTimerCoroutine()
    {
        // Wait for the threshold duration
        yield return new WaitForSeconds(specialAttackThreshold);

        // When the threshold is reached, the button should still be held:
        // releasing it earlier would have stopped this coroutine in canceled.
        // Set the isChargeComplete flag and send the charge complete signal
        Debug.Log("Charge Complete threshold reached!");
        isChargeComplete = true;

        // Trigger charge complete effects here (glow, sound effect, etc.)
        TriggerChargeCompleteEffect();

        chargeCheckCoroutine = null; // Coroutine finished
    }

    private void PerformNormalAttack()
    {
        Debug.Log("Perform Normal Attack!");
        // Normal attack logic
    }

    private void PerformSpecialAttack()
    {
        Debug.Log("Perform Special Attack!");
        // Charge attack logic
    }

    private void TriggerChargeCompleteEffect()
    {
        Debug.Log("Play Charge Complete Effect!");
        // Charge complete effect processing (display effects, play sound, etc.)
    }
}
</code></pre>
<p>In this code, when the button is pressed, <code>ChargeTimerCoroutine</code> starts and after <code>specialAttackThreshold</code> seconds, sets the <code>isChargeComplete</code> flag to <code>true</code> and calls the <code>TriggerChargeCompleteEffect()</code> method. When the button is released (<code>canceled</code>), the code checks whether this flag is <code>true</code> to determine whether to execute a charge attack or a normal attack.</p>
<p>By combining coroutines this way, you can leverage the benefits of the Input System's event-driven model while also handling mid-charge processing at intermediate timings. While it may seem more complex than implementing it in <code>Update</code>, as the number of input types grows or coordination with other actions becomes necessary, the Input System's Action Maps and event-based separation of concerns help keep your overall code organized.</p>
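<p>If you also want a charge gauge that fills continuously, rather than a single completion signal, the same coroutine slot can run a frame-by-frame loop instead. A sketch that could replace <code>ChargeTimerCoroutine</code> in the script above (<code>chargeGaugeImage</code> is a hypothetical <code>UnityEngine.UI.Image</code> with its Image Type set to Filled):</p>
<pre><code class="language-csharp">// Drop-in alternative to ChargeTimerCoroutine: updates a gauge every frame
private IEnumerator ChargeGaugeCoroutine()
{
    float elapsed = 0f;
    while (elapsed &#x3C; specialAttackThreshold)
    {
        elapsed = Time.time - buttonPressStartTime;
        // 0..1 progress toward the charge threshold
        float ratio = Mathf.Clamp01(elapsed / specialAttackThreshold);
        // chargeGaugeImage.fillAmount = ratio; // hypothetical UI reference
        yield return null; // wait one frame
    }

    isChargeComplete = true;
    TriggerChargeCompleteEffect();
    chargeCheckCoroutine = null;
}
</code></pre>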
<hr>
<p>:::ad</p>
<h2>Summary: Pros, Cons, and When to Use the Press Event Approach</h2>
<p>We've seen how using Unity's <strong>Input System</strong> <strong>Press events (<code>started</code> / <code>canceled</code>)</strong> enables an event-driven implementation of charge attacks.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>No need to use the <code>Update</code> function; code is organized on a per-event basis.</li>
<li>Precise capture of press start and end timings.</li>
<li>Relatively easy to distinguish between short and long presses and branch processing accordingly.</li>
<li>Input mapping management through Input Actions assets is intuitive, with high extensibility for key configuration and similar features.</li>
</ul>
<p><strong>Cons (considerations):</strong></p>
<ul>
<li>Processing at "specific timings while the button is held" (like charge complete effects) requires complementary mechanisms such as coroutines.</li>
<li>For simple charge time measurement alone, implementing in the <code>Update</code> function may feel simpler in some cases.</li>
</ul>
<p><strong>Which approach should you choose?</strong></p>
<p>The best implementation method depends on your project's scale, requirements, and team preferences.</p>
<ul>
<li>For <strong>small projects or prototypes</strong> with few inputs beyond the charge attack, a simple <code>Update</code> function implementation may be sufficient.</li>
<li>For <strong>medium to large projects</strong> that prioritize diverse inputs (gamepad support, key configuration, etc.), coordination with other actions, and future extensibility and maintainability, <strong>combining the Input System's Press events with coroutines</strong> is a strong option. Hold Interaction is also an option, but the Press event approach has advantages when you need the flexibility described in this article.</li>
</ul>
<p>The Input System has a learning curve, but once you're comfortable with it, it becomes a powerful tool. Give it a try and implement charge attacks in whatever way best suits your project.</p>]]></content:encoded><media:content url="https://uhiyama-lab.com/static/01f9ea2434798b9f69941dfc524c1870/20240405-001-InputSystem%E3%81%AE%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AB%E6%96%B9%E6%B3%95.png" medium="image"/></item></channel></rss>