
(Since this post was initially met with some reluctance to answer it, a version of it is also posted in Community.)

Please share neural networks diagrams you have made in Mathematica / WL. Here is an example:

ClearAll[a];
ns = {3, 4, 6, 4, 1};
nodes = MapIndexed[Function[{n, i}, Prepend[#, i[[1]]] & /@ Array[a, {n}]], ns];
edges = Map[Outer[Rule, #[[1]], #[[2]]] &, Partition[nodes, 2, 1]];
colors = Map[# -> ColorData[11, "ColorList"][[#[[1]]]] &, Flatten[nodes]];
Graph[Flatten[edges], VertexSize -> 0.3, VertexStyle -> colors]

[image: the neural network graph produced by the code above]

But here are some more examples.

Update

Addressing the concerns in the comments...

The question is intentionally given with a short explanation and a link to examples. I wanted to gather some pictures of neural networks made with Mathematica for a few presentations about deep learning. (Like this Mathematica-vs-R over deep learning presentation.) I was somewhat surprised that such images were not easy to find.

What I am interested in are images like these:

[images: three example neural-network diagrams]

Anton Antonov
  • "Share examples" is not a question. Why don't you simply replace the node numbers in your own code? – David G. Stork Jun 20 '18 at 22:10
  • @DavidG.Stork " 'Share examples' is not a question. why don't you simply replace the node numbers in your own code?" -- Please take a look at Vitaliy Kaurov's answer. Its submission addresses the first part of your comment; its content the second part. – Anton Antonov Jun 21 '18 at 01:49
  • To the "close" voters: 1. How come if this question is "too broad" there are two specific answers already? 2. If this question is "primarily opinion based", how come we recognize an image of a neural network when we see it? Did you look at the images in the provided link? – Anton Antonov Jun 21 '18 at 01:53
  • I didn't vote to close, but this question is FAR too broad. What if an artist "asked": "please make a drawing"? Your question gives us no hint at WHY you want "examples." Is it to highlight links? To enable massively large networks to be easily understood? Is it to show the nonlinearities in each node? Is it to show the weight values? A question should be able to be accepted but with your vague and broad "question," no matter what "answer" you "accept," there will be an infinity of other "answers," many even "better." This is no way to ask a question. (Now I'm voting to close.) – David G. Stork Jun 21 '18 at 04:09
  • I do not want to continue arguing with David G. Stork but his previous comment is too prominently "featured" in this question and I find it only mostly right. 1. "[...] this question is FAR too broad[...]" -- It is not that broad; images of neural networks are easily recognized (and apparently some people can relatively easily do them.) I did not want to influence the answers with more specific question formulations. (cont.) – Anton Antonov Jun 29 '18 at 13:12
  • (cont.) 2. "A question should be able to be accepted[...]" -- This is a good rule for most of the questions posted and to be posted at MSE. There are notable exceptions, like this one, and I can come up easily with at least 5 more. 3. "This is no way to ask a question." -- Apparently it is. 4. "(Now I'm voting to close.)" -- I appreciate being upfront, but I am not sure the close-voting was done with having in mind what is best for MSE's users. – Anton Antonov Jun 29 '18 at 13:12
  • @AntonAntonov asks: "To the "close" voters: 1. How come if this question is "too broad" there are two specific answers already?" Simple. Just ask the broadest possible question you can imagine, e.g., "What might be a good book to read?" and you can get a specific answer ("Bad Blood"). The fact that you get a specific answer implies nothing whatsoever as to whether the question was "too broad." Examples: 1) What's a good restaurant in China? 2) How would you improve Renaissance art? 3) How can you draw a world map differently? (Like the OP) "I am interested are images like these?" Why? – David G. Stork Jun 29 '18 at 19:10
  • @DavidG.Stork Your last comment is just an expansion of a previous argument you made. (To which I gave a response.) Do you have anything new to add? Also, why do you continue posting here? Do you still hope that this question is going to be closed or neglected because of your comments or votes? – Anton Antonov Jun 29 '18 at 19:46
  • Closed. Your response or "answers" consist of unsupported assertions, such as "Apparently it is" (because you did it and so you think it must be right). I've rebutted all such non-answer "answers" with actual reasons and examples. – David G. Stork Jun 29 '18 at 20:14
  • @DavidG.Stork 1. "I've rebutted all such non-answer "answers" with actual reasons and examples." -- No, you have not rebuffed anything. Even if we ignore my arguments in the comments above, the existence of the answers posted below is a very good contra-argument to your objections. 2. "Closed" -- You still want this question closed!? Do you think that closing this question would make MSE more useful? Do you think that removing this question and its answers would make MSE more useful? – Anton Antonov Jun 29 '18 at 20:48
  • "the existence of the answers... is a very good contra-argument to your objections." No... as my vague "book" questions with specific answers demonstrates. And yes: I think that closing this would make MSE more useful and focus attention on specific Qs. By the way, I wrote one of the world's leading textbooks on neural networks (nearing 100k sales, 4 languages, used in every leading university, ...) and wrestled how to best present networks, including deep ones. I confronted SPECIFIC, detailed problems in doing so... and never asked "please draw me one" as it wouldn't help. Over/out. – David G. Stork Jun 29 '18 at 21:23
    @DavidG.Stork 1. "And yes: I think that closing this would make MSE more useful and focus attention on specific Qs." -- It seems that a fair amount of people disagree with you. 2. "I wrote one of the world's leading textbooks on neural networks (nearing 100k sales, 4 languages[...]" -- Great! So, because of your Neural Networks expertise this question got a disproportionate amount of your attention. – Anton Antonov Jun 29 '18 at 22:18

5 Answers


Below is a function definition that can be used to make a neural network plot with formula and activation-function graphics. The code/plot can be polished further, but at this point I find it good enough...

Clear[FormulaNeuralNetworkGraph]
FormulaNeuralNetworkGraph[layerCounts : {_Integer, _Integer, _Integer}] :=      
  Block[{gr1, gr2, gr3, gr4, gr, bc},
   gr1 = IndexGraph[CompleteGraph[Take[layerCounts, 2]]];
   gr2 = Graph[Map[(layerCounts[[1]] + #) \[UndirectedEdge] (layerCounts[[1]] + layerCounts[[2]] + #) &, Range[layerCounts[[2]]]]];
   gr3 = IndexGraph[CompleteGraph[Take[layerCounts, -2]], layerCounts[[1]] + layerCounts[[2]] + 1];
   bc = layerCounts[[1]] + 2*layerCounts[[2]];
   gr4 = Graph[Map[(bc + #) \[UndirectedEdge] (bc + layerCounts[[3]] + #) &, Range[layerCounts[[3]]]], VertexLabels -> "Name"];
   gr = GraphUnion[gr1, gr2, gr3, gr4];
   Graph[gr, 
    GraphLayout -> {"MultipartiteEmbedding", 
      "VertexPartition" -> {layerCounts[[1]], layerCounts[[2]], 
        layerCounts[[2]], layerCounts[[3]], layerCounts[[3]]}}]
   ];

Clear[FormulaNeuralNetworkGraphPlot]
Options[FormulaNeuralNetworkGraphPlot] = Options[Graphics];

FormulaNeuralNetworkGraphPlot[layerCounts : {_Integer, _Integer, _Integer}, func1_, opts : OptionsPattern[]] :=
  FormulaNeuralNetworkGraphPlot[layerCounts, func1, # &, opts];

FormulaNeuralNetworkGraphPlot[layerCounts : {_Integer, _Integer, _Integer}, func1_, func2_, opts : OptionsPattern[]] :=
  Block[{plOpts, grFunc1, grFunc2, gr, vNames, vCoords, vNameToCoordsRules, edgeLines},
   plOpts = {PlotTheme -> "Default", Axes -> True, Ticks -> False, Frame -> True, FrameTicks -> False, ImageSize -> Small};
   grFunc1 = Plot[func1[x], {x, -2, 2}, Evaluate[plOpts]];
   grFunc2 = Plot[func2[x], {x, -2, 2}, Evaluate[plOpts]];

   gr = FormulaNeuralNetworkGraph[layerCounts];
   vNames = VertexList[gr];
   vCoords = VertexCoordinates /. AbsoluteOptions[gr, VertexCoordinates];
   vNameToCoordsRules = Thread[vNames -> vCoords];
   edgeLines = Arrow@ReplaceAll[List @@@ EdgeList[gr], vNameToCoordsRules];

Graphics[{ Arrowheads[0.02], GrayLevel[0.2], edgeLines,

 EdgeForm[Black], FaceForm[Gray], 
 Map[Disk[#, 0.04] &, vCoords[[1 ;; -layerCounts[[-1]] - 1]]],

 Black,
 Map[{EdgeForm[Gray], FaceForm[White], Disk[#, 0.14], 
    Text[Style["\[Sum]", 16, Bold], #]} &,
  Join[
   vCoords[[layerCounts[[1]] + 1 ;; layerCounts[[1]] + layerCounts[[2]]]],
   vCoords[[-2 layerCounts[[-1]] ;; -layerCounts[[-1]] - 1]]
   ]],

 Map[{EdgeForm[None], FaceForm[White], 
    Rectangle[# - {0.2, 0.15}, # + {0.2, 0.15}], 
    Inset[grFunc1, #1, Center, 0.4]} &, 
  vCoords[[ Total[layerCounts[[1 ;; 2]]] + 1 ;; Total[layerCounts[[1 ;; 2]]] + layerCounts[[2]]] ]],

 Map[{EdgeForm[None], FaceForm[White], 
    Rectangle[# - {0.2, 0.15}, # + {0.2, 0.15}], 
    Inset[grFunc2, #1, Center, 0.4]} &, 
  MapThread[Mean@*List, {vCoords[[-2 layerCounts[[-1]] ;; -layerCounts[[-1]] - 1]], vCoords[[-layerCounts[[-1]] ;; -1]]}]]}, 
opts]

];

Note that the function FormulaNeuralNetworkGraphPlot takes the options of Graphics.

 FormulaNeuralNetworkGraphPlot[{5, 9, 6}, Tanh, #^3 &,  ImageSize -> 500]

[image: neural network plot with summation nodes and activation-function insets]

(I tried to reuse as much as possible of the code from Szabolcs's answer. I had to move to using Graphics because I had a hard time insetting the activation-function plots using the multipartite graph options.)
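A possible alternative to moving to Graphics is to inset the plots through a custom VertexShapeFunction. A minimal sketch of the idea, not part of the original code; the tiny Tanh plot is an assumed stand-in for the activation-function graphics above:

```mathematica
(* Sketch: inset a small activation-function plot at every vertex
   via VertexShapeFunction; actPlot is an assumed stand-in plot. *)
actPlot = Plot[Tanh[x], {x, -2, 2}, Axes -> False, Frame -> True,
   FrameTicks -> False, ImageSize -> Tiny];
Graph[{1 <-> 4, 2 <-> 4, 3 <-> 4},
 GraphLayout -> {"MultipartiteEmbedding", "VertexPartition" -> {3, 1}},
 VertexShapeFunction -> (Inset[actPlot, #1, Center, 0.3] &)]
```

Whether this scales to the mixed disk/sum/plot vertices of the full function above is another matter, which is presumably why the Graphics route was taken.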

Anton Antonov

The key is GraphLayout -> "MultipartiteEmbedding".

layerCounts = {5, 3, 3, 8, 2};

graph = GraphUnion @@ MapThread[
    IndexGraph,
    {CompleteGraph /@ Partition[layerCounts, 2, 1], 
     FoldList[Plus, 0, layerCounts[[;; -3]]]}
    ];

vstyle = Catenate[
  Thread /@ Thread[
    TakeList[VertexList[graph], layerCounts] -> ColorData[97] /@ Range@Length[layerCounts]
    ]
  ]

[output: list of vertex-to-color rules]

graph = Graph[
  graph,
  GraphLayout -> {"MultipartiteEmbedding", "VertexPartition" -> layerCounts},
  GraphStyle -> "BasicBlack",
  VertexSize -> 0.5,
  VertexStyle -> vstyle
  ]

[image: layered neural network graph]

This won't work for only two layers because GraphUnion is being unreasonable when given a single argument. You can complain to WRI support about that, or implement a workaround.
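A minimal workaround for the two-layer case (a sketch; `twoLayerNetwork` is a hypothetical helper name) is to skip `GraphUnion` altogether, since a single complete bipartite graph needs no union:

```mathematica
(* Hypothetical helper: the two-layer case needs no GraphUnion,
   a single complete bipartite graph already is the network *)
twoLayerNetwork[layerCounts : {_Integer, _Integer}] :=
 Graph[
  IndexGraph[CompleteGraph[layerCounts]],
  GraphLayout -> {"MultipartiteEmbedding", "VertexPartition" -> layerCounts},
  GraphStyle -> "BasicBlack",
  VertexSize -> 0.5
  ]

twoLayerNetwork[{4, 3}]
```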

Szabolcs
  • Brilliant. "You can complain to WRI support about that, or implement a workaround." I might do both! By the way, at EWTC-2018 last week, during the "Quiz the Experts: Q&A", somebody did ask about graph functionalities development and support with more-or-less similar concerns as those you voiced in Community. – Anton Antonov Jun 21 '18 at 16:32

[image: symmetric layered network graph produced by the code below]

A slightly different function that always places the vertices symmetrically:

LayersGraph[layers_]:=
Module[{
    uni=Table[Unique[],#]&/@layers,
    coor=Flatten[Table[{k,#}&/@(Range[#]-Mean[Range[#]]&/@layers)[[k]],{k,Length[layers]}],1]},
Graph[
    Flatten[uni],
    Flatten[Outer[Rule,#1,#2]&@@@Partition[uni,2,1]],
VertexCoordinates->coor,
EdgeShapeFunction->"Line",VertexSize->.3]
]

Usage that gives the image above:

LayersGraph[{2, 2, 3, 7, 2, 5, 3, 4, 1}]

A slightly different version goes like this:

LayersGraph[layers_]:=
Module[{
    vert=TakeList[Range[Total[layers]],layers],
    coor=Flatten[Table[{k,#}&/@(Range[#]-Mean[Range[#]]&/@layers)[[k]],{k,Length[layers]}],1]},
Graph[
    Flatten[vert],
    Flatten[Outer[Rule,#1,#2]&@@@Partition[vert,2,1]],
VertexCoordinates->coor,
EdgeShapeFunction->"Line",GraphStyle->"SmallNetwork"]
]

LayersGraph[{2, 2, 3, 7, 2, 5, 3, 4, 1}]

[image: the same network rendered with the "SmallNetwork" graph style]

Vitaliy Kaurov

Using CompleteGraph with a list of layer sizes as input and deleting edges that connect non-consecutive layers:

ClearAll[layeredNW]
layeredNW[layers : {__}, opts : OptionsPattern[Graph]] := Module[{cg = 
    CompleteGraph[layers, DirectedEdges -> True]}, 
  SetProperty[EdgeDelete[cg, DirectedEdge[a_, b_] /; 
     (Subtract @@ (PropertyValue[{cg, #} , VertexCoordinates][[1]] & /@ {b, a}) > 1)], 
   {PerformanceGoal -> "Quality", VertexSize -> .5, VertexStyle -> White, 
    EdgeStyle -> Black, VertexCoordinates -> GraphEmbedding[cg], opts}]]

Example:

layers = {2, 5, 2, 3, 1, 3, 4, 1};
colors = Flatten[MapThread[ConstantArray,
      {ColorData[63, "ColorList"][[;; Length@layers]], layers}]];

layeredNW[layers, VertexStyle -> {i_ :> colors[[i]]}, ImageSize -> Large, VertexSize -> .3]

[image: colored layered network graph]

Stress-test

layers = {2, 5, 2, 3, 1, 3, 4, 1}*5;
colors = Flatten[
   MapThread[
    ConstantArray, {ColorData[63, "ColorList"][[;; Length@layers]], 
     layers}]];

layeredNW[layers, VertexStyle -> {i_ :> colors[[i]]}, 
 VertexSize -> .7, ImageSize -> 1600]

[image: large stress-test network graph]

kglr
  • Thank you for posting your answer! Please consider coming up with other stress-tests than the one I posted. I tried with a three to four times larger number of nodes than in the image I posted. I think your and Szabolcs's approaches handle larger neural network specifications better than the other posts. – Anton Antonov May 25 '19 at 21:44

I'm not sure the code is clean, but I once had to make a graph with a fixed size in all layers (except those with one node). I am posting it as is, just for diversity; maybe it will inspire somebody.

nList = {2, 7, 9, 1}; (* nodes by layers *)

Main functions:

nEmbed[nList_List] := Module[{nRagged, ragged},
   nRagged[list_List] := Internal`PartitionRagged[Range[Total@list], list];
   ragged = nRagged[nList];

   (* returns the flattened join of all 'matrices of connections' between layers *)
   Flatten@(Outer[Rule, Sequence @@ #] & /@
      Table[ragged[[i]], {i, Partition[Range[Length@nList], 2, 1]}])
   ];

nCoord[nList_List] := Module[{nRagged, ragged, coordy, coordx, k = 3},
   nRagged[list_List] := Internal`PartitionRagged[Range[Total@list], list];
   ragged = nRagged[nList];

   (* coordy reflects the vertical layout; coordx uses the same function but can be adjusted with k *)
   coordy[rag_List /; Length[rag] == 1] := {5.};
   coordy[rag_List /; Length[rag] > 1] :=
    Table[i N[10./(Length[rag] - 1), 2], {i, 0, Length[rag] - 1}];
   coordx = coordy;

   (* coordx applies to the layers (and thus all their nodes), while coordy applies to the nodes within each layer *)
   Thread[Range[Total@nList] ->
     Flatten[(Thread /@ Thread[{k (coordx@#), coordy /@ #}] &)@ragged, 1]]
   ];

Usage:

Graph[nEmbed[#], VertexCoordinates -> nCoord[#], 
   VertexLabels -> "Name",
   VertexSize -> 0.3,
   EdgeShapeFunction -> "Line",
   ImageSize -> Large] &@ nList

[image: layered network graph with labeled vertices]

A stress-test with nList = {8, 17, 19, 6, 4, 3}:

[image: stress-test network graph]

garej