Samples


The Wave library contains several samples illustrating how to use its different features. This section describes these samples and their main characteristics.

The quick_start sample

The quick_start sample shows a minimal way to use the Wave preprocessor library. It simply opens the file given as the first command line argument, preprocesses it assuming that no additional include paths or macros are defined, and outputs the textual representation of the tokens generated from the given input file. This sample is a good starting point for getting acquainted with Wave, because it avoids the additional complexity exposed by the more advanced samples.
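The essence of this sample can be condensed into a short program. The following is a minimal sketch only (not the sample's exact source), assuming the default token type and the default Re2C based C++ lexer: the input file is read into a string, handed to a boost::wave::context, and iterating over the context drives the preprocessing while the resulting tokens are printed.

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>

    #include <boost/wave.hpp>
    #include <boost/wave/cpplexer/cpp_lex_token.hpp>    // default token type
    #include <boost/wave/cpplexer/cpp_lex_iterator.hpp> // default (Re2C based) C++ lexer

    int main(int argc, char* argv[])
    {
        if (argc < 2) {
            std::cerr << "usage: quick_start input_file" << std::endl;
            return 1;
        }

        // read the whole input file into a string
        std::ifstream instream(argv[1]);
        std::string input = std::string(
            std::istreambuf_iterator<char>(instream.rdbuf()),
            std::istreambuf_iterator<char>());

        // the Wave context is parameterized with the input iterator type
        // and the lexer (token) iterator type to use
        typedef boost::wave::cpplexer::lex_token<> token_type;
        typedef boost::wave::cpplexer::lex_iterator<token_type> lex_iterator_type;
        typedef boost::wave::context<std::string::iterator, lex_iterator_type>
            context_type;

        try {
            // no additional include paths or macro definitions are set up
            context_type ctx(input.begin(), input.end(), argv[1]);

            // iterating over the context preprocesses the input on the fly
            for (context_type::iterator_type it = ctx.begin(), end = ctx.end();
                 it != end; ++it)
            {
                std::cout << (*it).get_value();
            }
        }
        catch (boost::wave::preprocess_exception const& e) {
            std::cerr << e.file_name() << "(" << e.line_no() << "): "
                      << e.description() << std::endl;
            return 2;
        }
        return 0;
    }

Note that Wave is a separately compiled library, so a program like this additionally has to be linked against Boost.Wave and the Boost libraries it depends on.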

The lexed_tokens sample

The lexed_tokens sample shows a minimal way to use the C++ lexing component of Wave without using the preprocessor. It opens the file specified as the first command line argument and prints out the contents of the tokens returned by the lexer.
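A sketch of the underlying idea is shown below, assuming the default token type and the default Re2C generated C++ lexer (this is not the sample's exact source): the lexer iterator is constructed directly on top of the input buffer, without any boost::wave::context, and simply enumerates the raw, unpreprocessed tokens.

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>

    #include <boost/wave/token_ids.hpp>
    #include <boost/wave/language_support.hpp>
    #include <boost/wave/cpplexer/cpp_lex_token.hpp>
    #include <boost/wave/cpplexer/cpp_lex_iterator.hpp>

    int main(int argc, char* argv[])
    {
        if (argc < 2) return 1;

        std::ifstream instream(argv[1]);
        std::string input = std::string(
            std::istreambuf_iterator<char>(instream.rdbuf()),
            std::istreambuf_iterator<char>());

        typedef boost::wave::cpplexer::lex_token<> token_type;
        typedef boost::wave::cpplexer::lex_iterator<token_type> lex_iterator_type;

        // construct the lexer directly on the input buffer; no preprocessing
        // context is involved, so macros and #include's are not interpreted
        lex_iterator_type it(input.begin(), input.end(),
            token_type::position_type(argv[1]),
            boost::wave::language_support(
                boost::wave::support_cpp | boost::wave::support_option_long_long));
        lex_iterator_type end;

        while (it != end) {
            std::cout << boost::wave::get_token_name(boost::wave::token_id(*it))
                      << ": " << (*it).get_value() << std::endl;
            ++it;
        }
        return 0;
    }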

The cpp_tokens sample

The cpp_tokens sample dumps out the information contained within the tokens returned from the iterator supplied by the Wave library. It shows how to use the Wave library in conjunction with a custom lexer and a custom token type. The lexer used within this sample is SLex [5] based, i.e. it is fed at runtime (at startup) with the token definitions (regular expressions) and generates the resulting DFA table. This table is used for token identification and is saved to disk afterwards to avoid regenerating it at the next program startup. The name of the file to which the DFA table is saved is wave_slex_lexer.dfa.

The main advantage of this SLex based lexer over the default Re2C [3] generated lexer is that it provides not only the line on which a particular token was recognized, but also the corresponding column position. Apart from that, the SLex based lexer is functionally fully compatible with the Re2C based one, i.e. you may switch your application to use it whenever you additionally need the column information from the preprocessing.
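Regardless of the lexer used, the position information travels with each token and can be queried from it. The following fragment is an illustration only, written in terms of the default token type and the context built in the quick_start sketch above (the sample itself uses its own SLex token and lexer types, which expose the same interface); with the default Re2C lexer the reported column may not be meaningful, which is exactly the difference discussed above.

    // 'ctx', 'context_type' and 'token_type' as in the quick_start sketch
    for (context_type::iterator_type it = ctx.begin(), end = ctx.end();
         it != end; ++it)
    {
        token_type::position_type const& pos = (*it).get_position();
        std::cout << pos.get_file() << ":" << pos.get_line() << ":"
                  << pos.get_column() << ": "
                  << boost::wave::get_token_name(boost::wave::token_id(*it))
                  << " |" << (*it).get_value() << "|" << std::endl;
    }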

Since this sample does not support any additional command line parameters, it won't work well with include files that aren't located in the same directory as the inspected input file. The command line syntax is straightforward:

    cpp_tokens input_file

The list_includes sample

The list_includes sample shows how the Wave library may be used to generate an include file dependency list for a particular input file. It relies entirely on the default library configuration. The command line syntax for this sample is given below:

    Usage: list_includes [options] file ...:
      -h [ --help ]        : print out program usage (this message)
      -v [ --version ]     : print the version number
      -I [ --path ] dir    : specify additional include directory
      -S [ --syspath ] dir : specify additional system include directory

Please note, though, that this sample will output only those include file names which are visible to the preprocessor. Given the following code snippet, only one of the two include directives is triggered during preprocessing, and for this reason only the corresponding file name is reported by the list_includes sample:

    #if defined(INCLUDE_FILE_A)
    #  include "file_a.h"
    #else
    #  include "file_b.h"
    #endif
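Internally, such a dependency list can be collected through Wave's preprocessing hooks instead of parsing the program output. The following is a rough sketch only, not the sample's actual implementation, and it assumes the hook interface of recent Wave versions (the exact signature of opened_include_file has changed between releases): a hooks class derived from the default hooks records every include file the preprocessor actually opens.

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>

    #include <boost/wave.hpp>
    #include <boost/wave/preprocessing_hooks.hpp>
    #include <boost/wave/cpplexer/cpp_lex_token.hpp>
    #include <boost/wave/cpplexer/cpp_lex_iterator.hpp>

    // hooks class printing the name of every include file actually opened
    struct include_collector
      : boost::wave::context_policies::default_preprocessing_hooks
    {
        // note: older Wave releases use a non-templated signature here
        template <typename ContextT>
        void opened_include_file(ContextT const& /*ctx*/,
            std::string const& relname, std::string const& absname,
            bool is_system_include)
        {
            std::cout << (is_system_include ? "<" : "\"") << relname
                      << (is_system_include ? ">" : "\"")
                      << "  ->  " << absname << std::endl;
        }
    };

    int main(int argc, char* argv[])
    {
        if (argc < 2) return 1;

        std::ifstream instream(argv[1]);
        std::string input = std::string(
            std::istreambuf_iterator<char>(instream.rdbuf()),
            std::istreambuf_iterator<char>());

        typedef boost::wave::cpplexer::lex_token<> token_type;
        typedef boost::wave::cpplexer::lex_iterator<token_type> lex_iterator_type;
        typedef boost::wave::context<std::string::iterator, lex_iterator_type,
            boost::wave::iteration_context_policies::load_file_to_string,
            include_collector> context_type;

        context_type ctx(input.begin(), input.end(), argv[1]);

        // consuming the tokens drives the preprocessing and thus the hook calls;
        // error handling is omitted for brevity
        for (context_type::iterator_type it = ctx.begin(), end = ctx.end();
             it != end; ++it)
        { /* discard the tokens */ }
        return 0;
    }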

The advanced_hooks sample

The advanced_hooks sample is based on the quick_start sample mentioned above. It shows how the advanced preprocessing hooks of the Wave library may be used to get in the output not only the preprocessed tokens from the evaluated conditional blocks, but also the tokens recognized inside the non-evaluated conditional blocks. To keep the generated token stream usable for further processing, the tokens from the non-evaluated conditional blocks are commented out.

Here is a small example of what the advanced_hooks sample does. Consider the following input:

    #define TEST 1
    #if defined(TEST)
    "TEST was defined: " TEST
    #else
    "TEST was not defined!"
    #endif

which will produce the following output:

    //"#if defined(TEST)
    "TEST was defined: " 1
//"#else //"TEST was not defined!" //"#endif

As you can see, the sample application prints out the conditional directives as well, in commented-out form.
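The mechanism behind this behaviour is again a preprocessing hook. The class below is a heavily simplified sketch, not the actual code of the advanced_hooks sample, and assumes the hook interface of recent Wave versions: the skipped_token hook is called for every token inside a non-evaluated conditional block, so a derived hooks class can echo those tokens with a leading comment. Such a hooks class would be plugged into the boost::wave::context exactly as shown in the list_includes sketch above.

    #include <iostream>

    #include <boost/wave.hpp>
    #include <boost/wave/token_ids.hpp>
    #include <boost/wave/preprocessing_hooks.hpp>

    // simplified hooks: echo the tokens of non-evaluated conditional blocks,
    // prefixing every skipped line with "//" so the result stays compilable
    struct echo_skipped_hooks
      : boost::wave::context_policies::default_preprocessing_hooks
    {
        echo_skipped_hooks() : at_line_start(true) {}

        // note: older Wave releases pass the token only, without the context
        template <typename ContextT, typename TokenT>
        void skipped_token(ContextT const& /*ctx*/, TokenT const& token)
        {
            if (at_line_start) {
                std::cout << "//";                  // comment the line out
                at_line_start = false;
            }
            std::cout << token.get_value();
            if (boost::wave::token_id(token) == boost::wave::T_NEWLINE)
                at_line_start = true;               // next skipped token starts a new line
        }

        bool at_line_start;
    };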

The wave sample

Because of its general usefulness, the wave sample is not located in the sample directory of the library, but inside the tools directory of Boost. The wave sample is usable as a full-fledged preprocessor executable on top of any other C++ compiler. It outputs the textual representation of the preprocessed tokens generated from a given input file. It is described in more detail here.

The waveidl sample

The main point of the waveidl sample is to show how a completely independent lexer type may be used in conjunction with the default token type of the Wave library. The lexer used in this sample is intended for an IDL language based preprocessor. It is based on the Re2C tool too, but recognizes a different set of tokens than the default C++ lexer contained within the Wave library. In particular, this lexer does not recognize any keywords (except true and false, which are needed by the preprocessor itself). This is required because different IDL languages exist, and identifiers of one language may be keywords of another. Certainly this implies postponing keyword identification until after preprocessing, but it allows Wave to be used for all of the IDL derivatives.

Using the Wave library to write an IDL preprocessor is only possible because the token sets of both languages are very similar. The tokens to be recognized by the waveidl IDL language preprocessor form nearly a complete subset of the full C++ token set.

The command line syntax for this sample is shown below:

    Usage: waveidl [options] [@config-file(s)] file:

      Options allowed on the command line only:
        -h [ --help ]              : print out program usage (this message)
        -v [ --version ]           : print the version number
        -c [ --copyright ]         : print out the copyright statement
        --config-file filepath     : specify a config file (alternatively: @filepath)

      Options allowed additionally in a config file:
        -o [ --output ] path       : specify a file to use for output instead of stdout
        -I [ --include ] path      : specify an additional include directory
        -S [ --sysinclude ] syspath : specify an additional system include directory
        -D [ --define ] macro[=[value]] : specify a macro to define
        -P [ --predefine ] macro[=[value]] : specify a macro to predefine
        -U [ --undefine ] macro    : specify a macro to undefine

The hannibal sample

The hannibal sample shows how to build a Spirit grammar on top of the Wave library. It was initially written and contributed to the Wave library by Danny Havenith (see his related web page here). The grammar of this example uses Wave as its preprocessor. It implements around 120 of the approximately 250 grammar rules as they can be found in The C++ Programming Language, Third Edition. These 120 rules allow a C++ source file to be parsed for all type information and declarations. In fact, this grammar parses C++ declarations, including class and template definitions, but skips function bodies. If so configured, the program will output an XML dump of the generated parse tree.

It may be a good starting point for a grammar that can be used for things like the reverse engineering done by some UML modelling tools, or for whatever use you may find for a grammar that gives you a list of all templates and classes in a file, together with their members.
