YAML is a human-readable data serialization language. The full YAML language
spec can be read at `yaml.org
<http://www.yaml.org/spec/1.2/spec.html#Introduction>`_. The simplest forms of
YAML are "scalars", "mappings", and "sequences". A scalar is any number
or string. The pound/hash symbol (#) begins a comment line. A mapping is
a set of key-value pairs where each key ends with a colon. For example:
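
.. code-block:: yaml

    # a small mapping (values are illustrative)
    name:      Tom Sawyer
    hat-size:  7
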
A sequence is a list of items where each item starts with a leading dash ('-').
For example:
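
.. code-block:: yaml

    # a small sequence (values are illustrative)
    - PowerPC
    - x86
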
You can combine mappings and sequences by indenting. For example, a sequence
of mappings in which one of the mapping values is itself a sequence:

.. code-block:: yaml

    # a sequence of mappings with one key's value being a sequence
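    - name:  Tom
      cpus:
       - PowerPC
       - x86
    - name:  Bob
      cpus:
       - x86
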
Sometimes sequences are known to be short and one entry per line is too
verbose, so YAML offers an alternate syntax for sequences called a "Flow
Sequence" in which you put comma-separated sequence elements into square
brackets. The above example could then be simplified to:

.. code-block:: yaml

    # a sequence of mappings with one key's value being a flow sequence
    - name:  Tom
      cpus:  [ PowerPC, x86 ]
    - name:  Bob
      cpus:  [ x86 ]


Introduction to YAML I/O
========================

The use of indenting makes the YAML easy for a human to read and understand,
but having a program read and write YAML involves a lot of tedious details.
The YAML I/O library structures and simplifies reading and writing YAML
documents.
YAML I/O assumes you have some "native" data structures which you want to be
able to dump as YAML and recreate from YAML. The first step is to try
writing example YAML for your data structures. You may find after looking at
possible YAML representations that a direct mapping of your data structures
to YAML is not very readable. Often the fields are not in an order that
a human would find readable, or the same information is replicated in multiple
locations, making it hard for a human to write such YAML correctly.
In relational database theory there is a design step called normalization in
which you reorganize fields and tables. The same considerations need to
go into the design of your YAML encoding. But you may not want to change
your existing native data structures. Therefore, when writing out YAML
there may be a normalization step, and when reading YAML there would be a
corresponding denormalization step.
YAML I/O uses a non-invasive, traits-based design. YAML I/O defines some
abstract base templates. You specialize those templates on your data types.
For instance, if you have an enumerated type FooBar you could specialize
ScalarEnumerationTraits on that type and define the enumeration() method:

.. code-block:: c++

    using llvm::yaml::ScalarEnumerationTraits;
    using llvm::yaml::IO;

    template <>
    struct ScalarEnumerationTraits<FooBar> {
      static void enumeration(IO &io, FooBar &value) {
        ...
      }
    };

As with all YAML I/O template specializations, the ScalarEnumerationTraits
specialization is used for both reading and writing YAML. That is, the mapping
between in-memory enum values and the YAML string representation is only in
one place. This ensures that the code for writing and parsing of YAML stays
in sync.
To specify a YAML mapping, you define a specialization on
llvm::yaml::MappingTraits.
If your native data structure happens to be a struct that is already normalized,
then the specialization is simple. For example:

.. code-block:: c++

    using llvm::yaml::MappingTraits;
    using llvm::yaml::IO;

    template <>
    struct MappingTraits<Person> {
      static void mapping(IO &io, Person &info) {
        io.mapRequired("name",     info.name);
        io.mapOptional("hat-size", info.hatSize);
      }
    };

A YAML sequence is automatically inferred if your data type has begin()/end()
iterators and a push_back() method. Therefore any of the STL containers
(such as std::vector<>) will automatically translate to YAML sequences.
Once you have defined specializations for your data types, you can
programmatically use YAML I/O to write a YAML document:

.. code-block:: c++

    using llvm::yaml::Output;

    Person tom;
    tom.name    = "Tom";
    tom.hatSize = 8;
    Person dan;
    dan.name    = "Dan";
    dan.hatSize = 7;

    std::vector<Person> persons;
    persons.push_back(tom);
    persons.push_back(dan);

    Output yout(llvm::outs());
    yout << persons;

This would write the following:
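
.. code-block:: yaml

    ---
    - name:      Tom
      hat-size:  8
    - name:      Dan
      hat-size:  7
    ...
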
And you can also read such YAML documents with the following code:

.. code-block:: c++

    using llvm::yaml::Input;

    typedef std::vector<Person> PersonList;
    std::vector<PersonList> docs;

    Input yin(document.getBuffer());
    yin >> docs;

    if ( yin.error() )
      return;

    // Process read document
    for ( PersonList &pl : docs ) {
      for ( Person &person : pl ) {
        llvm::outs() << "name=" << person.name;
      }
    }

One other feature of YAML is the ability to define multiple documents in a
single file. That is why reading YAML produces a vector of your document type.
When parsing a YAML document, if the input does not match your schema (as
expressed in your XxxTraits<> specializations), YAML I/O
will print out an error message and your Input object's error() method will
return true. For instance, the following document:
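
.. code-block:: yaml

    name:      Tom
    shoe-size: 12
    hat-size:  8
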
has a key (shoe-size) that is not defined in the schema. YAML I/O will
automatically generate this error:

.. code-block:: text

    YAML:2:2: error: unknown key 'shoe-size'

Similar errors are produced for other input not conforming to the schema.
YAML scalars are just strings (i.e. not a sequence or mapping). The YAML I/O
library provides support for translating between YAML scalars and specific
C++ types.

The following types have built-in support in YAML I/O:
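
* bool
* float
* double
* StringRef
* std::string
* int64_t
* int32_t
* int16_t
* int8_t
* uint64_t
* uint32_t
* uint16_t
* uint8_t
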
That is, you can use those types in fields of MappingTraits or as the element
type of a sequence. When reading, YAML I/O will validate that the string found
is convertible to that type and error out if not.
Given that YAML I/O is trait-based, the selection of how to convert your data
to YAML is based on the type of your data. But in C++ type matching, typedefs
do not generate unique type names. That means if you have two typedefs of
unsigned int, to YAML I/O both types look exactly like unsigned int. To
facilitate making unique type names, YAML I/O provides a macro which is used
like a typedef on built-in types, but expands to create a class with conversion
operators to and from the base type. For example:

.. code-block:: c++

    LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyFooFlags)
    LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyBarFlags)

This generates two classes, MyFooFlags and MyBarFlags, which you can use in your
native data structures instead of uint32_t. They are implicitly
converted to and from uint32_t. The point of creating these unique types
is that you can now specify traits on them to get different YAML conversions.
An example use of a unique type is that YAML I/O provides fixed-size unsigned
integers that are written by YAML I/O as hexadecimal instead of the decimal
format used by the built-in integer types:
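
* Hex64
* Hex32
* Hex16
* Hex8
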
You can use llvm::yaml::Hex32 instead of uint32_t and the only difference will
be that when YAML I/O writes out that type it will be formatted in hexadecimal.

ScalarEnumerationTraits
-----------------------
YAML I/O supports translating between in-memory enumerations and a set of string
values in YAML documents. This is done by specializing ScalarEnumerationTraits<>
on your enumeration type and defining an enumeration() method.
For instance, suppose you had an enumeration of CPUs and a struct with it as
a field:
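
.. code-block:: c++

    // (illustrative declarations; the enumerator values are arbitrary)
    enum CPUs {
      cpu_x86_64  = 5,
      cpu_x86     = 7,
      cpu_PowerPC = 8
    };

    struct Info {
      CPUs      cpu;
      uint32_t  flags;
    };
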
To support reading and writing of this enumeration, you can define a
ScalarEnumerationTraits specialization on CPUs, which can then be used
as a field type:

.. code-block:: c++

    using llvm::yaml::ScalarEnumerationTraits;
    using llvm::yaml::MappingTraits;
    using llvm::yaml::IO;

    template <>
    struct ScalarEnumerationTraits<CPUs> {
      static void enumeration(IO &io, CPUs &value) {
        io.enumCase(value, "x86_64",  cpu_x86_64);
        io.enumCase(value, "x86",     cpu_x86);
        io.enumCase(value, "PowerPC", cpu_PowerPC);
      }
    };

    template <>
    struct MappingTraits<Info> {
      static void mapping(IO &io, Info &info) {
        io.mapRequired("cpu",   info.cpu);
        io.mapOptional("flags", info.flags, 0);
      }
    };

When reading YAML, if the string found does not match any of the strings
specified by the enumCase() methods, an error is automatically generated.
When writing YAML, if the value being written does not match any of the values
specified by the enumCase() methods, a runtime assertion is triggered.
Another common data structure in C++ is a field where each bit has a unique
meaning. This is often used in a "flags" field. YAML I/O has support for
converting such fields to a flow sequence. For instance, suppose you
had the following bit flags defined:
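
.. code-block:: c++

    // (illustrative bit values)
    enum {
      flagPointy = 1,
      flagHollow = 2,
      flagFlat   = 4,
      flagRound  = 8
    };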

    LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyFlags)

To support reading and writing of MyFlags, you specialize ScalarBitSetTraits<>
on MyFlags and provide the bit values and their names.

.. code-block:: c++

    using llvm::yaml::ScalarBitSetTraits;
    using llvm::yaml::MappingTraits;
    using llvm::yaml::IO;

    template <>
    struct ScalarBitSetTraits<MyFlags> {
      static void bitset(IO &io, MyFlags &value) {
        io.bitSetCase(value, "hollow", flagHollow);
        io.bitSetCase(value, "flat",   flagFlat);
        io.bitSetCase(value, "round",  flagRound);
        io.bitSetCase(value, "pointy", flagPointy);
      }
    };

    struct Info {
      StringRef name;
      MyFlags   flags;
    };

    template <>
    struct MappingTraits<Info> {
      static void mapping(IO &io, Info &info) {
        io.mapRequired("name",  info.name);
        io.mapRequired("flags", info.flags);
      }
    };

With the above, YAML I/O (when writing) will test each value in the
bitset trait against the flags field, and each one that matches will
cause the corresponding string to be added to the flow sequence. The opposite
is done when reading, and any unknown string values will result in an error.
With the above schema, a sample valid YAML document is:

.. code-block:: yaml

    name:   Tom
    flags:  [ pointy, flat ]

Sometimes a "flags" field might contain an enumeration part
defined by a bit-mask.
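
For example (the particular bit values below are illustrative):

.. code-block:: c++

    enum {
      flagsFeatureA = 1,
      flagsFeatureB = 2,
      flagsFeatureC = 4,

      flagsCPUMask  = 24,

      flagsCPU1     = 8,
      flagsCPU2     = 16
    };
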
To support reading and writing such fields, you need to use the maskedBitSet()
method and provide the bit values, their names, and the enumeration mask.

.. code-block:: c++

    template <>
    struct ScalarBitSetTraits<MyFlags> {
      static void bitset(IO &io, MyFlags &value) {
        io.bitSetCase(value, "featureA", flagsFeatureA);
        io.bitSetCase(value, "featureB", flagsFeatureB);
        io.bitSetCase(value, "featureC", flagsFeatureC);
        io.maskedBitSetCase(value, "CPU1", flagsCPU1, flagsCPUMask);
        io.maskedBitSetCase(value, "CPU2", flagsCPU2, flagsCPUMask);
      }
    };

YAML I/O (when writing) will apply the enumeration mask to the flags field,
and compare the result against the values from the bitset. As with a regular
bitset, each value that matches will cause the corresponding string to be added
to the flow sequence.
Sometimes for readability a scalar needs to be formatted in a custom way. For
instance, your internal data structure may use an integer for time (seconds
since some epoch), but in YAML it would be much nicer to express that integer
in some time format (e.g. 4-May-2012 10:30pm). YAML I/O has a way to support
custom formatting and parsing of scalar types by specializing ScalarTraits<> on
your data type. When writing, YAML I/O will provide the native type and
your specialization must produce its textual form. When reading, YAML I/O will
provide an llvm::StringRef of the scalar and your specialization
must convert that to your native data type. An outline of a custom scalar type
looks like:

.. code-block:: c++

    using llvm::yaml::ScalarTraits;
    using llvm::yaml::IO;

    template <>
    struct ScalarTraits<MyCustomType> {
      static void output(const MyCustomType &value, void*,
                         llvm::raw_ostream &out) {
        out << value;  // do custom formatting here
      }
      static StringRef input(StringRef scalar, void*, MyCustomType &value) {
        // do custom parsing here.  Return the empty string on success,
        // or an error message on failure.
        return StringRef();
      }
      // Determine if this scalar needs quotes.
      static bool mustQuote(StringRef) { return true; }
    };

YAML block scalars are string literals that are represented in YAML using the
literal block notation, just like the example shown below:
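
.. code-block:: yaml

    text: |
      First line
      Second line
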
The YAML I/O library provides support for translating between YAML block scalars
and specific C++ types by allowing you to specialize BlockScalarTraits<> on
your data type. The library doesn't provide any built-in support for block
scalar I/O for types like std::string and llvm::StringRef as they are already
supported by YAML I/O and use the ordinary scalar notation by default.
BlockScalarTraits specializations are very similar to the
ScalarTraits specialization - YAML I/O will provide the native type and your
specialization must produce the scalar's text when writing, and
it will provide an llvm::StringRef that has the value of that block scalar
and your specialization must convert that to your native data type when reading.
An example of a custom type with an appropriate specialization of
BlockScalarTraits is shown below:

.. code-block:: c++

    using llvm::yaml::BlockScalarTraits;
    using llvm::yaml::IO;

    struct MyStringType {
      std::string Str;
    };

    template <>
    struct BlockScalarTraits<MyStringType> {
      static void output(const MyStringType &Value, void *Ctxt,
                         llvm::raw_ostream &OS) {
        OS << Value.Str;
      }

      static StringRef input(StringRef Scalar, void *Ctxt,
                             MyStringType &Value) {
        Value.Str = Scalar.str();
        return StringRef();
      }
    };

For your type T to be translated to or from a YAML mapping, you must specialize
llvm::yaml::MappingTraits on T and implement the "void mapping(IO &io, T&)"
method. If your native data structures use pointers to a class everywhere,
you can specialize on the class pointer. Examples:

.. code-block:: c++

    using llvm::yaml::MappingTraits;
    using llvm::yaml::IO;

    // Example of struct Foo which is used by value
    template <>
    struct MappingTraits<Foo> {
      static void mapping(IO &io, Foo &foo) {
        io.mapOptional("size", foo.size);
        ...
      }
    };

    // Example of struct Bar which is natively always a pointer
    template <>
    struct MappingTraits<Bar*> {
      static void mapping(IO &io, Bar *&bar) {
        io.mapOptional("size", bar->size);
        ...
      }
    };

The mapping() method is responsible, if needed, for normalizing and
denormalizing. In a simple case where the native data structure requires no
normalization, the mapping method just uses mapOptional() or mapRequired() to
bind the struct's fields to YAML key names. For example:

.. code-block:: c++

    using llvm::yaml::MappingTraits;
    using llvm::yaml::IO;

    template <>
    struct MappingTraits<Person> {
      static void mapping(IO &io, Person &info) {
        io.mapRequired("name",     info.name);
        io.mapOptional("hat-size", info.hatSize);
      }
    };

When [de]normalization is required, the mapping() method needs a way to access
normalized values as fields. To help with this, there is
a template MappingNormalization<> which you can use to automatically
do the normalization and denormalization. The template is used to create
a local variable in your mapping() method which contains the normalized keys.
Suppose you have a native data type
Polar which specifies a position in polar coordinates (distance, angle):
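
.. code-block:: c++

    // (illustrative declaration)
    struct Polar {
      float distance;
      float angle;
    };
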
but you've decided the normalized YAML form should be in x,y coordinates. That
is, you want the YAML to look like:
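
.. code-block:: yaml

    x:   10.3
    y:   -4.7
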
You can support this by defining a MappingTraits that normalizes the polar
coordinates to x,y coordinates when writing YAML and denormalizes x,y
coordinates into polar when reading YAML.

.. code-block:: c++

    using llvm::yaml::MappingTraits;
    using llvm::yaml::IO;

    template <>
    struct MappingTraits<Polar> {

      class NormalizedPolar {
      public:
        NormalizedPolar(IO &io)
          : x(0.0), y(0.0) {
        }
        NormalizedPolar(IO &, Polar &polar)
          : x(polar.distance * cos(polar.angle)),
            y(polar.distance * sin(polar.angle)) {
        }
        Polar denormalize(IO &) {
          return Polar{sqrt(x*x + y*y), atan2(y, x)};
        }

        float x;
        float y;
      };

      static void mapping(IO &io, Polar &polar) {
        MappingNormalization<NormalizedPolar, Polar> keys(io, polar);

        io.mapRequired("x", keys->x);
        io.mapRequired("y", keys->y);
      }
    };

When writing YAML, the local variable "keys" will be a stack-allocated
instance of NormalizedPolar, constructed from the supplied polar object which
initializes its x and y fields. The mapRequired() methods then write out the x
and y values as key/value pairs.
When reading YAML, the local variable "keys" will be a stack-allocated instance
of NormalizedPolar, constructed by the empty constructor. The mapRequired()
methods will find the matching key in the YAML document and fill in the x and y
fields of the NormalizedPolar object keys. At the end of the mapping() method,
when the local keys variable goes out of scope, the denormalize() method will
automatically be called to convert the read values back to polar coordinates,
and the result is then assigned back to the second parameter to mapping().
In some cases, the normalized class may be a subclass of the native type and
could be returned by the denormalize() method, except that the temporary
normalized instance is stack-allocated. In these cases, the utility template
MappingNormalizationHeap<> can be used instead. It is just like
MappingNormalization<> except that it heap-allocates the normalized object
when reading YAML. It never destroys the normalized object. The denormalize()
method can then return "this".
Within a mapping() method, calls to io.mapRequired() mean that that key is
required to exist when parsing YAML documents; otherwise, YAML I/O will issue
an error.
On the other hand, keys registered with io.mapOptional() are allowed to not
exist in the YAML document being read. So what value is put in the field
for those optional keys?
There are two steps to how those optional fields are filled in. First, the
second parameter to the mapping() method is a reference to a native class. That
native class must have a default constructor. Whatever value the default
constructor initially sets for an optional field will be that field's value.
Second, the mapOptional() method has an optional third parameter. If provided,
it is the value that mapOptional() should set that field to if the YAML document
does not have that key.
There is one important difference between those two ways (default constructor
and third parameter to mapOptional). When YAML I/O generates a YAML document,
if the mapOptional() third parameter is used and the actual value being written
is the same as (using ==) the default value, then that key/value pair is not
written.
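
For example, assuming an Info struct with a uint32_t flags field, supplying 0
as the third parameter means the "flags" key is simply omitted on output
whenever its value equals 0:

.. code-block:: c++

    template <>
    struct MappingTraits<Info> {
      static void mapping(IO &io, Info &info) {
        io.mapRequired("cpu",   info.cpu);
        io.mapOptional("flags", info.flags, 0);   // 0 is the default
      }
    };
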
When writing out a YAML document, the keys are written in the order that the
calls to mapRequired()/mapOptional() are made in the mapping() method. This
gives you a chance to write the fields in an order that a human reader of
the YAML document would find natural. This may be different from the order
of the fields in the native class.
When reading in a YAML document, the keys in the document can be in any order,
but they are processed in the order that the calls to mapRequired()/mapOptional()
are made in the mapping() method. That enables some interesting
functionality. For instance, if the first field bound is the cpu and the second
field bound is flags, and the flags are cpu-specific, you can programmatically
switch how the flags are converted to and from YAML based on the cpu.
This works for both reading and writing. For example:

.. code-block:: c++

    using llvm::yaml::MappingTraits;
    using llvm::yaml::IO;

    template <>
    struct MappingTraits<Info> {
      static void mapping(IO &io, Info &info) {
        io.mapRequired("cpu", info.cpu);
        // flags must come after cpu for this to work when reading yaml
        if ( info.cpu == cpu_x86_64 )
          io.mapRequired("flags", *(My86_64Flags*)&info.flags);
        else
          io.mapRequired("flags", *(My86Flags*)&info.flags);
      }
    };

The YAML syntax supports tags as a way to specify the type of a node before
it is parsed. This allows dynamic types of nodes. But the YAML I/O model uses
static typing, so there are limits to how you can use tags with the YAML I/O
model. Recently, we added support to YAML I/O for checking/setting the optional
tag on a map. Using this functionality it is even possible to support different
mappings, as long as they are convertible.
To check a tag, inside your mapping() method you can use io.mapTag() to specify
what the tag should be. This will also add that tag when writing YAML.
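
A minimal sketch of how this might look (MyType and the tag name are invented
for illustration):

.. code-block:: c++

    template <>
    struct MappingTraits<MyType> {
      static void mapping(IO &io, MyType &info) {
        // Checks that the node's tag is "!MyType" when reading, and emits
        // that tag when writing.
        io.mapTag("!MyType", true);
        io.mapRequired("value", info.value);
      }
    };
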
Sometimes in a YAML map, each key/value pair is valid, but the combination is
not. This is similar to something having no syntax errors, but still having
semantic errors. To support semantic-level checking, YAML I/O allows
an optional ``validate()`` method in a MappingTraits template specialization.
When parsing YAML, the ``validate()`` method is called *after* all key/values in
the map have been processed. Any error message returned by the ``validate()``
method during input will be printed just like a syntax error would be printed.
When writing YAML, the ``validate()`` method is called *before* the YAML
key/values are written. Any error during output will trigger an ``assert()``
because it is a programming error to have invalid struct values.

.. code-block:: c++

    using llvm::yaml::MappingTraits;
    using llvm::yaml::IO;

    template <>
    struct MappingTraits<Stuff> {
      static void mapping(IO &io, Stuff &stuff) {
        ...
      }
      static StringRef validate(IO &io, Stuff &stuff) {
        // Look at all fields in 'stuff' and if there
        // are any bad values return a string describing
        // the error.  Otherwise return an empty string.
        return StringRef();
      }
    };

A YAML "flow mapping" is a mapping that uses the inline notation
(e.g. { x: 1, y: 0 }) when written to YAML. To specify that a type should be
written in YAML using flow mapping, your MappingTraits specialization should
add "static const bool flow = true;". For instance:

.. code-block:: c++

    using llvm::yaml::MappingTraits;
    using llvm::yaml::IO;

    template <>
    struct MappingTraits<Stuff> {
      static void mapping(IO &io, Stuff &stuff) {
        ...
      }

      static const bool flow = true;
    };

Flow mappings are subject to line wrapping according to the Output object
configuration.
For your type T to be translated to or from a YAML sequence, you must specialize
llvm::yaml::SequenceTraits on T and implement two methods:
``size_t size(IO &io, T&)`` and
``T::value_type& element(IO &io, T&, size_t index)``. For example:

.. code-block:: c++

    template <>
    struct SequenceTraits<MySeq> {
      static size_t size(IO &io, MySeq &list) { ... }
      static MySeqEl &element(IO &io, MySeq &list, size_t index) { ... }
    };

The size() method returns how many elements are currently in your sequence.
The element() method returns a reference to the i'th element in the sequence.
When parsing YAML, the element() method may be called with an index one bigger
than the current size. Your element() method should allocate space for one
more element (using the default constructor if the element is a C++ object) and
return a reference to that newly allocated space.
A YAML "flow sequence" is a sequence that, when written to YAML, uses the
inline notation (e.g. [ foo, bar ]). To specify that a sequence type should
be written in YAML as a flow sequence, your SequenceTraits specialization should
add "static const bool flow = true;". For instance:

.. code-block:: c++

    template <>
    struct SequenceTraits<MyList> {
      static size_t size(IO &io, MyList &list) { ... }
      static MyListEl &element(IO &io, MyList &list, size_t index) { ... }

      // The existence of this member causes YAML I/O to use a flow sequence
      static const bool flow = true;
    };

With the above, if you used MyList as the data type in your native data
structures, then when converted to YAML, a flow sequence of integers
will be used (e.g. [ 10, -3, 4 ]).

Flow sequences are subject to line wrapping according to the Output object
configuration.
Since a common source of sequences is std::vector<>, YAML I/O provides macros,
LLVM_YAML_IS_SEQUENCE_VECTOR() and LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR(), which
can be used to easily specify SequenceTraits<> on a std::vector type. YAML
I/O does not partially specialize SequenceTraits on std::vector<> because that
would force all vectors to be sequences. An example use of the macros:

.. code-block:: c++

    std::vector<MyType1>;
    std::vector<MyType2>;
    LLVM_YAML_IS_SEQUENCE_VECTOR(MyType1)
    LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR(MyType2)

YAML allows you to define multiple "documents" in a single YAML file. Each
new document starts with a left-aligned "---" token. The end of all documents
is denoted with a left-aligned "..." token. Many users of YAML will never
have need for multiple documents. The top level node in their YAML schema
will be a mapping or sequence. For those cases, the following is not needed.
But for cases where you do want multiple documents, you can specify a
trait for your document list type. The trait has the same methods as
SequenceTraits but is named DocumentListTraits. For example:

.. code-block:: c++

    template <>
    struct DocumentListTraits<MyDocList> {
      static size_t size(IO &io, MyDocList &list) { ... }
      static MyDocType &element(IO &io, MyDocList &list, size_t index) { ... }
    };

When an llvm::yaml::Input or llvm::yaml::Output object is created, its
constructor takes an optional "context" parameter. This is a pointer to
whatever state information you might need.
For instance, in a previous example we showed how the conversion type for a
flags field could be determined at runtime based on the value of another field
in the mapping. But what if an inner mapping needs to know some field value
of an outer mapping? That is where the "context" parameter comes in. You
can set values in the context in the outer map's mapping() method and
retrieve those values in the inner map's mapping() method.
The context value is just a void*. All your traits which use the context
and operate on your native data types need to agree on what the context value
actually is. It could be a pointer to an object or struct which your various
traits use to share context-sensitive information.
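
A minimal sketch of the pattern (MyContext, Outer, and their fields are
invented for illustration; the context pointer is the one passed to the
Input/Output constructor and is read back with io.getContext()):

.. code-block:: c++

    struct MyContext {
      CPUs cpu;   // value shared between outer and inner mappings
    };

    template <>
    struct MappingTraits<Outer> {
      static void mapping(IO &io, Outer &outer) {
        io.mapRequired("cpu", outer.cpu);
        // Record the cpu in the shared context so that the mapping() for the
        // inner type can consult it when converting its own fields.
        if (MyContext *ctx = static_cast<MyContext*>(io.getContext()))
          ctx->cpu = outer.cpu;
        io.mapRequired("inner", outer.inner);
      }
    };
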
The llvm::yaml::Output class is used to generate a YAML document from your
in-memory data structures, using traits defined on your data types.
To instantiate an Output object you need an llvm::raw_ostream, an optional
context pointer, and an optional wrapping column:

.. code-block:: c++

    class Output : public IO {
    public:
      Output(llvm::raw_ostream &, void *context = NULL, int WrapColumn = 70);
      ...
    };

Once you have an Output object, you can use the C++ stream operator on it
to write your native data as YAML. One thing to recall is that a YAML file
can contain multiple "documents". If the top level data structure you are
streaming as YAML is a mapping, scalar, or sequence, then Output assumes you
are generating one document and wraps the output
with a leading "``---``" and trailing "``...``".
The WrapColumn parameter will cause the flow mappings and sequences to
line-wrap when they go over the supplied column. Pass 0 to completely
suppress the wrapping.

.. code-block:: c++

    using llvm::yaml::Output;

    void dumpMyMapDoc(const MyMapType &info) {
      Output yout(llvm::outs());
      yout << info;
    }

The above could produce output like:
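
.. code-block:: yaml

    ---
    name:      Tom
    hat-size:  7
    ...
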
On the other hand, if the top level data structure you are streaming as YAML
has a DocumentListTraits specialization, then Output walks through each element
of your DocumentList and generates a "---" before the start of each element
and ends with a "...".

.. code-block:: c++

    using llvm::yaml::Output;

    void dumpMyMapDoc(const MyDocListType &docList) {
      Output yout(llvm::outs());
      yout << docList;
    }

The above could produce output like:
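
.. code-block:: yaml

    ---
    name:      Tom
    hat-size:  7
    ---
    name:      Dan
    hat-size:  7
    ...
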
The llvm::yaml::Input class is used to parse YAML document(s) into your native
data structures. To instantiate an Input
object you need a StringRef to the entire YAML file, and optionally a context
pointer:

.. code-block:: c++

    class Input : public IO {
    public:
      Input(StringRef inputContent, void *context=NULL);
      ...
    };

Once you have an Input object, you can use the C++ stream operator to read
the document(s). If you expect there might be multiple YAML documents in
one file, you'll need to specialize DocumentListTraits on a list of your
document type and stream in that document list type. Otherwise you can
just stream in the document type. Also, you can check if there were any
syntax errors in the YAML by calling the error() method on the Input object.

.. code-block:: c++

    // Reading a single document
    using llvm::yaml::Input;

    Input yin(mb.getBuffer());

    // Parse the YAML file
    MyDocType theDoc;
    yin >> theDoc;

    // Check for error
    if ( yin.error() )
      return;

or

.. code-block:: c++

    // Reading multiple documents in one file
    using llvm::yaml::Input;

    LLVM_YAML_IS_DOCUMENT_LIST_VECTOR(std::vector<MyDocType>)

    Input yin(mb.getBuffer());

    // Parse the YAML file
    std::vector<MyDocType> theDocList;
    yin >> theDocList;

    // Check for error
    if ( yin.error() )
      return;