You seem to wish TeX to analyze the entire .tex-source code/.tex-input-file and to make some (orthographic) adjustments before actually processing it the usual way.
This is not how traditional TeX-engines are designed to work.
Using Knuth's analogy of TeX being a beast with eyes and a digestive tract, you can simplify things a bit and say that TeX is designed to work as follows:
TeX's eyes look at .tex-input-files line by line and TeX places characters (not tokens yet!) corresponding to those seen in a line of .tex-input into its mouth.
So TeX's mouth receives sequences of characters more or less corresponding to those seen when "looking" at a line of .tex-input.
TeX's mouth "chews" these sequences of characters into smaller bits, so-called tokens (character-tokens/control-sequence-tokens). In other words:
TeX's mouth takes the single characters as sequences of directives for producing so-called tokens (character-tokens/control-sequence-tokens).
The mouth produces the tokens and sends them down the gullet.
Expandable tokens, e.g., macro-tokens, are expanded while going through the gullet. (Unless the gullet was triggered to suppress expansion, which is the case, e.g., with tokens forming the definition-text of a macro defined in terms of \def. LaTeX's \newcommand is a macro which is based on \def.) The gullet is the place of expansion.
After going through the gullet, tokens reach the stomach where executable tokens, e.g., things like \def, are executed/where assignments are carried out, boxes are built, paragraphs are broken into lines, the resulting lines are distributed across pages, and the .pdf-output-file is created.
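To make the expansion/execution distinction concrete, here is a tiny illustration of my own (not part of your question):

% \def is an assignment: it is carried out in TeX's stomach and creates
% the macro \greeting whose replacement text is "Hello".
\def\greeting{Hello}
% When \greeting occurs later in the input, TeX's gullet expands the
% macro-token \greeting into the character-tokens H, e, l, l, o; only
% those character-tokens reach the stomach, where they get typeset.
\greeting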
Tailored to your question, the gist of all this (over?)simplification of traditional TeX engines' way of working is:
The concepts of TeX's eyes processing files of .tex-input line by line and of TeX's mouth processing the single lines of .tex-input character by character actually contradict the idea of having TeX "look" at an entire .tex-input-file as a whole, or at an entire line of .tex-input as a whole, in order to make whatever adjustments (e.g., replacements or the like) before "digesting" things the usual way.
You probably can use a LuaTeX-based engine and have it pre-process things via Lua features (whereby replacing could take place) before having them passed to TeX's -eh- usual digestion-mechanisms. (This is the approach exhibited in egreg's answer.) Alternatively you probably can have TeX read the entire .tex-input-file in a verbatim-catcode-régime, have it do the replacement on the resulting set of tokens (e.g., via xstring's \StrSubstitute or via the routines provided by expl3), and then switch back to the normal catcode-régime and pass the result to \scantokens. But this answer of mine to an older question, which is also about replacing things in a document, provides a short survey of some scenarios where a simple search-and-replace (no matter how it is done) of phrases occurring in the .tex-source code might not be sufficient.
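For illustration, here is a rough sketch of the LuaTeX route (my own minimal example, not egreg's actual code; the word pair cooperate/coöperate is just a placeholder). Compile it with LuaLaTeX:

\documentclass{article}
\usepackage{luacode}
\begin{luacode}
-- Filter every line of .tex-input before TeX's eyes get to see it:
local function substitute(line)
  return (line:gsub("cooperate", "coöperate"))
end
luatexbase.add_to_callback("process_input_buffer", substitute, "substitute")
\end{luacode}
\begin{document}
Let us cooperate.% comes out as "Let us coöperate."
\end{document}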
How about introducing an \if-switch for forking the orthographic style and having TeX define macros for each word, depending on that \if-switch?
A problem with such an approach might be that after tokenizing a control-word-token TeX usually ignores spaces occurring in the .tex-input. There are situations where this doesn't matter, e.g., at the end of a sentence or subordinate clause, when the word in question is trailed by some punctuation mark or comma. There are also situations where a trailing space should not be ignored, e.g., when more words follow. You could use the xspace package; in a very high percentage of cases its command \xspace makes the right decision regarding insertion of a space behind a word. In the example below I chose another approach: each control-word-token denoting a word whose orthography may be changed must be trailed by a slash / which gets gobbled. Since slashes are tokenized as character-tokens, spaces after slashes will not be ignored.

However, the approach of defining a macro for each word whose orthography shall be changeable does not take into account scenarios where things are not expanded but processed verbatim, e.g., material occurring inside a verbatim-environment or in arguments of \verb-commands and the like. Also you need to take care that with things like \uppercase/\lowercase the macros are expanded before the case-changing-routine is carried out; LaTeX's \MakeUppercase/\MakeLowercase do this for you.
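For comparison, a minimal sketch of the xspace alternative mentioned above (my own illustration; it is not the approach used in the example below):

\documentclass{article}
\usepackage{xspace}
\newcommand\cooperate{co\"operate\xspace}
\begin{document}
% \xspace inserts a space because another word follows:
Let us \cooperate today.
% \xspace suppresses the space because punctuation follows:
Let us \cooperate.
\end{document}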
The way I defined things in the example below also lets you change/switch orthography within the document, i.e., for parts of the document only. If you do this, take care with so-called moving arguments (i.e., arguments that wind up in several places of the document, e.g., section-titles that wind up in the main text, in the table of contents, in page-headers, and perhaps in bookmarks of .pdf-files) so that the right orthographic variants end up in the right places of the document/of the .pdf-output-file.
\documentclass{article}
\newif\ifoldorthography
\makeatletter
\@ifdefinable\DefineOrthographyDependantCommand{%
\DeclareRobustCommand\DefineOrthographyDependantCommand[3]{%
\@ifdefinable#1{%
\def#1/{\ifoldorthography\expandafter\@secondoftwo\else\expandafter\@firstoftwo\fi{#2}{#3}}%
}%
}%
}%
\makeatother
\DefineOrthographyDependantCommand\dais{dais}{daïs}%
\DefineOrthographyDependantCommand\cooperate{cooperate}{coöperate}%
\DefineOrthographyDependantCommand\anasthesia{anasthesia}{anasthæsia}%
\DefineOrthographyDependantCommand\hotel{hotel}{hôtel}
\begin{document}
\oldorthographytrue
old orthography:
\dais/ \cooperate/ \anasthesia/ \hotel/
\dais/, \cooperate/, \anasthesia/, \hotel/.
\ifoldorthography
\begin{verbatim}
daïs
coöperate
anasthæsia
hôtel
\end{verbatim}
\else
\begin{verbatim}
dais
cooperate
anasthesia
hotel
\end{verbatim}
\fi
\bigskip\hrule\bigskip
\oldorthographyfalse
current orthography:
\dais/ \cooperate/ \anasthesia/ \hotel/
\dais/, \cooperate/, \anasthesia/, \hotel/.
\ifoldorthography
\begin{verbatim}
daïs
coöperate
anasthæsia
hôtel
\end{verbatim}
\else
\begin{verbatim}
dais
cooperate
anasthesia
hotel
\end{verbatim}
\fi
\end{document}
