[EDIT: improved the graphics and code] If you call $S$ the sigmoid function (left), and $Q_{\text{u}}$ the uniform quantization operator, the right plot is obtained by:
$$Q_{\text{nu}} =S^{-1}(Q_{\text{u}}(S(\cdot)))$$
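For example, a concrete choice for $S$ is the $\mu$-law compressor and its inverse (the same pair used in the code below, here with $\mu = 2^5 - 1$):
$$S(x) = \operatorname{sgn}(x)\,\frac{\ln(1+\mu|x|)}{\ln(1+\mu)}, \qquad S^{-1}(x) = \operatorname{sgn}(x)\,\frac{(1+\mu)^{|x|}-1}{\mu}.$$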
This is illustrated by the four-panel figure (sigmoid, uniform quantization, inverse sigmoid, non-uniform quantization) produced by this basic Matlab code:
time = linspace(-1,1,1000);
Q=4; % Number of bits (almost: 2^(Q+1)+1 levels on [-1,1])
Qu = round(time*2^Q)/2^Q; % Uniform quantization
%%% Choice of companding/expanding function: pick one pair
%%% (as written, the mu-law pair overrides the square-root pair)
%% Square-root
S = @(x) sign(x).*sqrt(abs(x));
Sinv = @(x) sign(x).*(x.^2);
%% Mu-law
mu = 2^5-1;
S = @(x) sign(x).*log(1+mu*abs(x))/log(1+mu);
Sinv = @(x) sign(x).*((1+mu).^abs(x)-1)/mu;
Qnu = sign(time).*Sinv(round(S(abs(time))*2^Q)/2^Q); % Non-uniform quantization
subplot(2,2,1)
plot(time,S(time));
xlabel('Sigmoid function')
subplot(2,2,2)
plot(time,Qu);
xlabel('Uniform quantization')
subplot(2,2,3)
plot(time,Sinv(time));
xlabel('Inverse sigmoid function')
subplot(2,2,4)
plot(time,Qnu);
xlabel('Non-uniform quantization')
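As a complement, one can plot the relative error of both quantizers; a minimal sketch to append to the script above (the variable names and the threshold excluding inputs near zero, where the ratio is undefined, are my own choices):

idx = abs(time) > 1e-2; % exclude x ~ 0 (arbitrary threshold, to avoid dividing by zero)
rel_u = (time(idx)-Qu(idx))./time(idx); % relative error, uniform quantizer
rel_nu = (time(idx)-Qnu(idx))./time(idx); % relative error, non-uniform quantizer
figure;
plot(time(idx),rel_u,time(idx),rel_nu);
legend('uniform','non-uniform');
xlabel('Relative quantization error (x-x_Q)/x');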
The main idea is to keep the quantization step "almost proportional" to the input, so that the relative quantization error $(x-x_Q)/x$ does not vary too much across the signal. You can plot the relative-error diagram too, as in the sketch above. So your options are:
- use a non-uniform quantizer (right plot): more precise, but harder to implement
- use a shaping function (left plot) that acts like a variance-stabilizing transform (in the spirit of Anscombe or Box-Cox): a sigmoid, a square root, a logarithm (as in the A-law and $\mu$-law); then apply a uniform quantizer and, when needed, the inverse of the shaping function (a quick step-size computation follows this list)
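To make the "almost proportional" step explicit, here is a quick high-resolution computation, assuming the $\mu$-law compressor above: the uniform quantizer uses step $\Delta = 2^{-Q}$ in the compressed domain, so the effective step around an input $x \neq 0$ is about
$$\frac{\Delta}{S'(x)} = \Delta\,\frac{\ln(1+\mu)}{\mu}\,\bigl(1+\mu|x|\bigr) \approx \Delta\,\ln(1+\mu)\,|x| \quad \text{when } \mu|x| \gg 1,$$
that is, roughly proportional to $|x|$, which keeps the relative error nearly constant.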
The second option is sometimes called companding (or compansion), a portmanteau of compressing and expanding. It is also related to dynamic range compression. While such designs have been largely heuristic, I would like to mention the recent paper *Scalar Quantization for Relative Error* (Data Compression Conference, 2011):
> Quantizers for probabilistic sources are usually optimized for mean-squared error. In many applications, maintaining low relative error is a more suitable objective. This measure has previously been heuristically connected with the use of logarithmic companding in perceptual coding. We derive optimal companding quantizers for fixed rate and variable rate under high-resolution assumptions. The analysis shows logarithmic companding is optimal for variable-rate quantization but generally not for fixed-rate quantization. Naturally, the improvement in relative error from using a correctly optimized quantizer can be arbitrarily large. We extend this framework for a large class of nondifference distortions.