Replacing Day One - need shortcut

I am not good at creating shortcuts… Does anyone know where I can get a shortcut that, when run, will show my “on this day in history” entries from Bear? (I would like to see what I journaled in past years “on this day.”)


This is the shortcut I use - Shortcuts

Basically it opens Bear and pulls up all the notes that contain today’s date (I format my dates as YYYY-MM-DD).
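In case it helps anyone roll their own, here is a rough Python equivalent of what the shortcut does, using Bear’s x-callback-url search action. The month-day search term is an assumption about how your dated titles are written, so adjust it to match your own notes:

import datetime
import subprocess
import urllib.parse

# The shortcut searches for today's date; dropping the year ("%m-%d")
# also surfaces the same day from past years.
on_this_day = datetime.date.today().strftime("%m-%d")

# Open Bear's search for that term via its x-callback-url search action.
# "open" works on macOS; in Shortcuts, an "Open URL" action pointed at the
# same bear:// URL does the same thing.
url = "bear://x-callback-url/search?term=" + urllib.parse.quote(on_this_day)
subprocess.run(["open", url])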

I recently transitioned away from DayOne, after using it for 14 years, to doing all my journaling in Bear notes. I’ve developed a nice little routine in Bear and was even able to get all my DayOne entries over into Bear using a Python script. I’m very happy with the results so far and much prefer journaling in Bear, since I am already using it for writing and work notes.


Same here, but I’m not a techy person, so I have to rename 12 years of entries to YYYY-MM-DD.

I also had to do that when importing from DayOne to Bear. I will preface this by saying I am not a techy person either, but I had Gemini write me a Python script and then give me step-by-step instructions for running it. It was basically like following a recipe, and it worked great!

It did take a couple of tries to get the Python script right. This is what I ended up with. It also merges any entries that occurred on the same day into a single daily entry, adding a timestamp to each one.

The only issue I ran into was audio files, which did not import. Thankfully, I didn’t have many of those (only like 30) so I manually imported them.


import json
import datetime
from collections import defaultdict

# --- CONFIGURATION ---
INPUT_FILE = 'Journal.json'
OUTPUT_FILE = 'Merged_Journal_All_Media.json'

def parse_iso_date(iso_string):
    """Parses DayOne ISO timestamp into a datetime object."""
    try:
        clean_iso = iso_string.replace('Z', '+00:00')
        return datetime.datetime.fromisoformat(clean_iso)
    except ValueError:
        return datetime.datetime.now()

def get_date_key(iso_string):
    """Extracts YYYY-MM-DD string."""
    return iso_string.split('T')[0]

def format_time_display(dt_object):
    """Formats datetime object to 12-hour time (e.g., 02:30 PM)."""
    return dt_object.strftime("%I:%M %p")

def main():
    try:
        with open(INPUT_FILE, 'r', encoding='utf-8') as f:
            data = json.load(f)
    except FileNotFoundError:
        print(f"❌ Error: Could not find '{INPUT_FILE}'.")
        return

    entries = data.get('entries', [])
    print(f"📂 Loaded {len(entries)} entries. Processing...")

    grouped_entries = defaultdict(list)
    for entry in entries:
        date_key = get_date_key(entry['creationDate'])
        grouped_entries[date_key].append(entry)

    new_entries = []
    
    # List of attachment keys Day One might use
    attachment_keys = ['photos', 'audios', 'pdfs', 'videos']

    for date_key, daily_batch in sorted(grouped_entries.items()):
        # Sort chronologically
        daily_batch.sort(key=lambda x: x['creationDate'])
        
        first_entry = daily_batch[0]
        
        merged_entry = {
            'uuid': first_entry['uuid'], 
            'creationDate': first_entry['creationDate'],
            'location': first_entry.get('location'),
            'weather': first_entry.get('weather'),
            'tags': set(),
            # Initialize empty lists for all possible attachment types
            'photos': [],
            'audios': [],
            'pdfs': [],
            'videos': []
        }

        text_blocks = []

        for item in daily_batch:
            # 1. Handle Text
            raw_text = item.get('text')
            if raw_text is None: raw_text = ""
            raw_text = raw_text.strip()
            
            # Fallback to richText if text is empty
            if not raw_text:
                raw_text = item.get('richText', '')
                if raw_text is None: raw_text = ""
                raw_text = raw_text.strip()

            if raw_text:
                dt = parse_iso_date(item['creationDate'])
                time_str = format_time_display(dt)
                block = f"**{time_str}**\n{raw_text}"
                text_blocks.append(block)

            # 2. Handle ALL Attachment Types
            for key in attachment_keys:
                if key in item and isinstance(item[key], list):
                    merged_entry[key].extend(item[key])
            
            # 3. Handle Tags
            if 'tags' in item:
                for tag in item['tags']:
                    merged_entry['tags'].add(tag)

        # Build final markdown
        full_body = "\n\n---\n\n".join(text_blocks)
        final_text = f"# {date_key}\n\n{full_body}"

        merged_entry['text'] = final_text
        merged_entry['tags'] = list(merged_entry['tags'])
        
        # Clean up empty fields (so we don't have empty "audios": [] in JSON)
        for key in attachment_keys + ['location', 'weather']:
            if not merged_entry[key]:
                del merged_entry[key]

        new_entries.append(merged_entry)

    output_data = data.copy()
    output_data['entries'] = new_entries

    with open(OUTPUT_FILE, 'w', encoding='utf-8') as f:
        json.dump(output_data, f, indent=2, ensure_ascii=False)

    print(f"✅ Success! Merged {len(entries)} entries.")
    print(f"💾 Saved as '{OUTPUT_FILE}'.")

if __name__ == "__main__":
    main()
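The script expects the Journal.json from a DayOne JSON export in the same folder and writes Merged_Journal_All_Media.json next to it. If you want to sanity-check the merge before importing into Bear, a quick snippet along these lines (not part of the original script, just a suggestion) counts how many days were combined from multiple same-day entries by looking for the “---” separators the script inserts:

import json

# Load the merged export produced by the script above.
with open('Merged_Journal_All_Media.json', encoding='utf-8') as f:
    merged = json.load(f)

entries = merged['entries']
# Days built from several same-day entries contain the "---" separator
# that the script places between time-stamped text blocks.
multi = sum(1 for e in entries if '\n\n---\n\n' in e['text'])
print(f"{len(entries)} daily entries, {multi} merged from multiple same-day entries.")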


Thank you! It’s kind of you to share it 🙂

Thank you so much for sharing!