
4 posts tagged with "blog"


Time

· About 1 min read
Nick Lange
Someone at 5L Labs

What follows is an evolving thought process on how to deal with information aging out in machine learning models.

Information aging out for learning (human and machine) requires thinking about the problem space from two different angles:

  1. Embeddings
  2. Weights / Models

Of which there are three scenarios:

  1. Explicitly dated information
  2. Implicitly dated information
  3. Undated information

Of sources that also vary along a trust vector:

  a. Mostly Trusted
  b. Untrusted

All arriving from multiple sources, including but not limited to:

  1. Books (our oldest form of information) - Permanent form of information
  2. Articles (news, blogs, journals) - Semipermanent form of information
  3. Social Media (the most ephemeral form of information)

In addition, we need to consider whether the model or human is aiming for general or deep knowledge of the topic at hand.

Deep knowledge may have less stickiness over time, while general knowledge may be more resilient to time decay.
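
To make the embeddings angle concrete, here is a minimal sketch of what time decay at retrieval might look like, assuming a setup where each chunk carries a source type and an optional publication timestamp. The `decayed_score` helper, the source-type keys, and the half-life values are all hypothetical, chosen only to illustrate the permanent / semipermanent / ephemeral split above:

```python
import time
from typing import Optional

# Illustrative half-lives in days per source type; these numbers are
# assumptions for the sketch, not values from the post.
HALF_LIFE_DAYS = {
    "book": 3650,     # books: effectively permanent, decay very slowly
    "article": 365,   # articles: semipermanent
    "social": 7,      # social media: ephemeral
}

def decayed_score(similarity: float,
                  published_at: Optional[float],
                  source_type: str,
                  now: Optional[float] = None) -> float:
    """Down-weight an embedding similarity score by the age of its source.

    Undated information gets no decay, mirroring the 'undated' scenario;
    explicitly and implicitly dated chunks decay with a half-life chosen
    by how permanent their source type is.
    """
    if published_at is None:
        return similarity
    now = time.time() if now is None else now
    age_days = max(0.0, (now - published_at) / 86400)
    half_life = HALF_LIFE_DAYS.get(source_type, 365)
    return similarity * 0.5 ** (age_days / half_life)
```

The exponential half-life form is just one convenient choice; anything monotone in source age would serve the same illustrative purpose here.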

So where to go from here?

  1. Step 1 - Call out a problem that I don't think others are looking at yet
  2. Step 2 - Do nothing for six months
  3. Step 3 - Wait for some other person to solve it
  4. Step 4 - Profit!!!

Entirely Private Machine Learning Home Setup

· About 4 min read
Nick Lange
Someone at 5L Labs
Warning

Background

Caveat Emptor: This is a public blog post of what probably should be a private note (quality-wise).

Machine Specification:

  • CPU: Mac M2 Pro
  • RAM: 96GB DDR4
  • Storage: 1TB NVMe SSD

Tonight turned into an unexpected journey through the rabbit holes of self-hosted technologies. I decided to venture out of my comfort zone, and boy, was it an enlightening experience! Here's a rundown of gotchas:

OE