Tuesday, February 26, 2008

How to use the FOR batch command in the DOS extensions

Notes on the DOS FOR extensions...

I used to think the DOS command line was too weak to carry out the kind of complex operations a UNIX shell can. In fact, since MS strengthened the command line starting with WIN2K, it has borrowed quite a few of UNIX's good ideas. It is still not as flexible as UNIX, but it can handle the vast majority of tasks: for instance, chaining two (or more) commands with && and ||, so that the previous command's return value decides whether the next one runs, and so on. The most striking of these enhancements is the FOR command.

For example, with the right parameters, FOR can turn the output of date /t from "Sat 07/13/2002" into whatever format you like, say "2002-07-13":

c:\>for /f "tokens=2,3,4 delims=/ " %a in ('date /t') do @echo %c-%a-%b
2002-07-13

This example is explained in detail in section 3 below.
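For comparison, the same reshaping in a POSIX shell. This is only a sketch: awk splits the fixed sample string on "/" and spaces, mirroring what delims=/ plus the trailing space does in the cmd.exe one-liner above.

```shell
# Split "Sat 07/13/2002" on "/" and spaces, then reorder the pieces,
# mirroring `tokens=2,3,4 delims=/ ` in the cmd.exe version.
echo "Sat 07/13/2002" | awk -F'[/ ]' '{ print $4 "-" $2 "-" $3 }'
# → 2002-07-13
```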

0. Basic usage

Simply put, FOR is a loop: it generates a series of commands over a range you specify. The simplest case is to list the range by hand and run the given command on each value. For example, to get a quick report of the free space on each disk partition:

      for %a in (c: d: e: f:) do @dir %a|find "bytes free"

which outputs:

                     8 Dir(s)       1,361,334,272 bytes free
                    15 Dir(s)       8,505,581,568 bytes free
                    12 Dir(s)      12,975,149,056 bytes free
                     7 Dir(s)      11,658,854,400 bytes free 

You can use it to make commands that do not support wildcards operate on a series of files. In WIN9X, the TYPE command (which displays file contents) did not accept patterns like *.txt (TYPE has supported wildcards since WIN2K). In cases like that, FOR helps:

      for %a in (*.txt) do type %a
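The UNIX shell has the same idiom built in. A minimal sketch of the equivalent loop, using cat in place of TYPE (the sample files are made up for the demonstration):

```shell
# Mirror `for %a in (*.txt) do type %a`: run one cat per matching file.
dir=$(mktemp -d) && cd "$dir"
printf 'hello\n' > a.txt
printf 'world\n' > b.txt
for f in *.txt; do cat "$f"; done
# → hello
# → world
```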

But none of this is FOR at its most powerful. In my view its real strength shows in these advanced uses:

1. /r walks an entire directory tree
2. /f takes the contents of a text file as the loop range
3. /f takes the output of a command as the loop range
4. the %~ operators split a file name into name, extension, drive letter, and other parts


Examples of each follow:

1. Walking a directory tree with /r

When a file-name wildcard such as *.* or *.txt is the range of for /r, the loop covers every matching file under the current directory, including files in subdirectories. Say you want to search the contents of every txt file in the current directory (subdirectories included) for the string "bluebear". find cannot traverse subdirectories on its own, so we use for:

      for /r . %a in (*.txt) do @find "bluebear" %a 

The @ in front of find just keeps the find command line itself out of the output. DOS has had that for a very long time; it has nothing to do with FOR.
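On UNIX the recursive search is a find/grep one-liner. A rough equivalent of the for /r loop above (the directory layout is invented for the demo; grep -l prints only the names of matching files):

```shell
dir=$(mktemp -d) && cd "$dir"
mkdir -p sub
printf 'bluebear lives here\n' > sub/note.txt
printf 'nothing to see\n'      > top.txt
# Walk the tree and search each .txt file, as `for /r . %a in (*.txt)` does.
find . -name '*.txt' -exec grep -l bluebear {} \;
# → ./sub/note.txt
```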

When . is used as the range, for /r loops over only the subdirectory structure (the directory names), not the files inside. It is a bit like the TREE command, but with a different emphasis: TREE aims for pretty, readable output, while FOR's output suits automated tasks. For example, we all know that in a project managed with CVS, every subdirectory contains a CVS directory, and when shipping the software we sometimes want to strip them all out:

      for /r . %a in (.) do @if exist %a\CVS rd /s /q %a\CVS

The if exist check comes first because for mechanically enumerates every directory, including ones that have no CVS underneath, so checking with if exist is the safe thing to do.

A delete like this is powerful enough to be dangerous, so use it with care. Before actually running the command above, it is best to replace rd /s /q with @echo to list the directories that would be deleted, and switch back to rd /s /q only after verifying the list:

      for /r . %a in (.) do @if exist %a\CVS @echo %a\CVS

The listed directories may carry an extra ".", as in c:\proj\release\.\CVS, but that does not affect how the command behaves.
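The preview-then-delete habit carries over directly to UNIX. A sketch with find (the proj/release layout is made up; -prune stops find from descending into a CVS directory it is about to remove):

```shell
dir=$(mktemp -d) && cd "$dir"
mkdir -p proj/release/CVS proj/src
# Preview first (the @echo step in the text above):
find . -type d -name CVS -prune -print
# → ./proj/release/CVS
# Then delete for real:
find . -type d -name CVS -prune -exec rm -rf {} +
find . -type d -name CVS | wc -l
# → 0
```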

2. Using a file's contents or a command's output as the loop range:

Suppose you have a file todel.txt listing all the files to be deleted, and you want to delete every file named in it. Assume one file name per line, like this:

      c:\temp\a1.txt
      c:\temp\a2.txt
      c:\temp\subdir\b3.txt
      c:\temp\subdir\b4.txt

FOR can do it:

      for /f %a in (todel.txt) do del %a 

The command can be made even more capable. Suppose your todel.txt is not as clean as the example above but was generated directly by DIR, with some useless information mixed in, like this:

       Volume in drive D is DATA
       Volume Serial Number is C47C-9908

         Directory of D:\tmp

        09/26/2001      12:50 PM                18,426 alg0925.txt
        12/02/2001      04:29 AM                   795 bsample.txt
        04/11/2002      04:18 AM                 2,043 invitation.txt
                     4 File(s)             25,651 bytes
                     0 Dir(s)       4,060,700,672 bytes free 

for can still extract the file names and act on them:

      for /f "skip=5 tokens=5" %a in (todel.txt) do @if exist %a DEL %a 

Of course, the command above actually deletes; if you only want to see which files would be touched, replace DEL with echo:

      for /f "skip=5 tokens=5" %a in (todel.txt) do @if exist %a echo %a 

and you will see:

      alg0925.txt
      bsample.txt
      invitation.txt 

skip=5 means skip the first 5 lines (the header of the DIR output); tokens=5 puts the 5th column of each line into %a, which happens to be the file name. I added a file-existence check because "free" on the last line also lands in column 5; I have not found a good way to filter out the last two lines, so the check keeps things safe.
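The skip/tokens combination maps naturally onto awk. A sketch against the captured DIR listing from the text (NR > 5 plays the role of skip=5, $5 the role of tokens=5; the date-looking first field is my extra guard against the two summary lines that tripped up the cmd.exe version):

```shell
cd "$(mktemp -d)"
cat > todel.txt <<'EOF'
 Volume in drive D is DATA
 Volume Serial Number is C47C-9908

   Directory of D:\tmp

09/26/2001      12:50 PM                18,426 alg0925.txt
12/02/2001      04:29 AM                   795 bsample.txt
04/11/2002      04:18 AM                 2,043 invitation.txt
             4 File(s)             25,651 bytes
             0 Dir(s)       4,060,700,672 bytes free
EOF
# Keep column 5, but only on lines whose first field looks like a date.
awk 'NR > 5 && $1 ~ /^[0-9][0-9]\// { print $5 }' todel.txt
```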

3. Using a command's output as the loop range with /f

A very useful feature. Say we want to know the names of the current environment variables (names only, no values). SET prints them in name=value format, but FOR can pick out just the name part:

      FOR /F "delims==" %i IN ('set') DO @echo %i 

which shows:

      ALLUSERSPROFILE
      APPDATA
      CLASSPATH
      CommonProgramFiles
      COMPUTERNAME
      ComSpec
      dircmd
      HOMEDRIVE      ...... 

Here the output of the set command is used as the loop range. delims== makes = the separator, and since FOR /F takes the first token of each line by default, the variable name is what comes out. To list only the values instead:

      FOR /F "delims== tokens=2" %i IN ('set') DO @echo %i 

tokens=2 works as in the previous example: the second column (split on =) becomes the loop value.
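The UNIX counterpart of set is env, and cut takes the place of delims/tokens. A sketch (head just keeps the listing short; -f2- differs slightly from tokens=2 in that it keeps any later "=" signs in the value):

```shell
# Name part only (delims== keeps the first token in cmd.exe; -f1 here):
env | cut -d= -f1 | sort | head -5
# Value part only (tokens=2 there; -f2- here):
env | cut -d= -f2- | head -5
```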

A more useful example:

We know that date /t (/t means do not prompt the user for input) prints something like:

      Sat 07/13/2002 

Now I want to pull out just the day-of-month part, that is, 13:

      for /f "tokens=3 delims=/ " %a in ('date /t') do @echo %a 

Changing the number after tokens to 1, 2, 3, or 4 gives you Sat, 07, 13, and 2002 respectively. Note that there is a space after delims=/, meaning both / and the space are separators; because of that space, delims must be the last item in the /f options.

More flexibly, as mentioned at the start of this article, the date can be printed in 2002-07-13 format:

      for /f "tokens=2,3,4 delims=/ " %a in ('date /t') do @echo %c-%a-%b 

When tokens lists several values, they map to %a, %b, %c, and so on. Really it depends on the variable you name: if you name %i, they become %i, %j, %k, etc.

Applied with some imagination, there is hardly anything this cannot do.

4. Splitting a file name into name, extension, drive letter, and other parts with the %~ operators

This one is simple: the loop variable's value is automatically reduced to just the file name, just the extension, just the drive letter, and so on.

Example: to list the titles of all the mp3s under c:\mp3, a plain dir /b/s or for /r gives something like:

      g:\mp3\Archived\5-18-01-A\游鸿明-下沙\游鸿明-01 下沙.mp3
      g:\mp3\Archived\5-18-01-A\游鸿明-下沙\游鸿明-02 21个人.mp3
      ......
      g:\mp3\Archived\5-18-01-A\王菲-寓言\王菲-阿修罗.mp3
      g:\mp3\Archived\5-18-01-A\王菲-寓言\王菲-彼岸花.mp3
      g:\mp3\Archived\5-18-01-A\王菲-寓言\王菲-不爱我的我不爱.mp3
      ......

If I want only the titles (no path and no ".mp3"):

      游鸿明-01 下沙
      游鸿明-02 21个人
      ......
      王菲-阿修罗
      王菲-彼岸花
      王菲-不爱我的我不爱
      ...... 

FOR can do it:

      for /r g:\mp3 %a in (*.mp3) do @echo %~na

Every operator beginning with %~ performs some file-name split. See the for /? help for details.
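POSIX shells get the same effect with parameter expansion. A sketch of the %~na idea (the path is made up for the example):

```shell
# The shell analogue of %~na: strip the directory and the extension.
f='/mp3/Archived/5-18-01-A/artist-album/artist-01 song.mp3'
base=${f##*/}       # drop everything up to the last "/"
name=${base%.mp3}   # drop the trailing ".mp3"
echo "$name"
# → artist-01 song
```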

Some of the examples in this article may have no practical use, or could be done some other way. They are only meant to show that FOR, using DOS commands alone and no external tools, can carry out quite flexible tasks.

See the for /? help for details.

The FOR command in detail

FOR %variable IN (set) DO command [command-parameters]

      %variable  Specifies a single letter replaceable parameter.
      (set)      Specifies a set of one or more files. Wildcards may be used.
      command    Specifies the command to carry out for each file.
      command-parameters
                 Specifies parameters or switches for the specified command.

To use the FOR command in a batch program, specify %%variable instead of %variable. Variable names are case sensitive, so %i is different from %I.

If Command Extensions are enabled, the following additional forms of the FOR command are supported:

FOR /D %variable IN (set) DO command [command-parameters]

If set contains wildcards, then matching is done against directory names instead of file names.

FOR /R [[drive:]path] %variable IN (set) DO command [command-parameters]

Walks the directory tree rooted at [drive:]path, executing the FOR statement in each directory of the tree. If no directory is specified after /R, the current directory is assumed. If set is just a single period (.) character, it simply enumerates the directory tree.

FOR /L %variable IN (start,step,end) DO command [command-parameters]

The set is a sequence of numbers from start to end, by step amount. So (1,1,5) generates the sequence 1 2 3 4 5 and (5,-1,1) generates the sequence (5 4 3 2 1).

FOR /F ["options"] %variable IN (file-set) DO command [command-parameters]
FOR /F ["options"] %variable IN ("string") DO command [command-parameters]
FOR /F ["options"] %variable IN ('command') DO command [command-parameters]

or, if the usebackq option is present:

FOR /F ["options"] %variable IN (file-set) DO command [command-parameters]
FOR /F ["options"] %variable IN ('string') DO command [command-parameters]
FOR /F ["options"] %variable IN (`command`) DO command [command-parameters]

filenameset is one or more file names. Each file is opened, read, and processed before going on to the next file in filenameset. Processing consists of reading the file, breaking it into individual lines of text, and then parsing each line into zero or more tokens. The body of the FOR loop is then called with the variable value(s) set to the found token string(s). By default, /F passes the first blank-separated token from each line of each file. Blank lines are skipped. You can override the default parsing behavior by specifying the optional "options" parameter. This is a quoted string containing one or more keywords that specify different parsing options. The keywords are:

            eol=c           - specifies an end-of-line comment character
                              (just one)
            skip=n          - specifies the number of lines to skip at the
                              beginning of the file.
            delims=xxx      - specifies a delimiter set. This replaces the
                              default delimiter set of space and tab.
            tokens=x,y,m-n  - specifies which tokens from each line are to
                              be passed to the for body for each iteration.
                              This causes additional variable names to be
                              allocated. The m-n form is a range, specifying
                              the mth through the nth tokens. If the last
                              character in the tokens= string is an asterisk,
                              an additional variable is allocated and
                              receives the remaining text on the line after
                              the last token parsed.
            usebackq        - specifies that the new semantics are in force:
                              a back-quoted string is executed as a command,
                              a single-quoted string is a literal string, and
                              double quotes may be used to quote file names
                              in filenameset.

Some examples might help:

FOR /F "eol=; tokens=2,3* delims=, " %i in (myfile.txt) do @echo %i %j %k

would parse each line in myfile.txt, ignoring lines that begin with a semicolon and passing the 2nd and 3rd tokens from each line to the for body, with tokens delimited by commas and/or spaces. Notice that the for body references %i to get the 2nd token, %j to get the 3rd token, and %k to get all remaining tokens after the 3rd. For file names that contain spaces, you need to quote the file name with double quotes. To use double quotes in this manner, you also need to use the usebackq option; otherwise the double quotes are interpreted as defining a literal string to parse.

%i is explicitly declared in the for statement, while %j and %k are implicitly declared via the tokens= option. You can specify up to 26 tokens via the tokens= line, provided it does not cause an attempt to declare a variable higher than the letter 'z' or 'Z'. Remember, FOR variables are single-letter, case sensitive, and global, and no more than 52 can be in use at any one time.

You can also use the FOR /F parsing logic on an immediate string, by making the filenameset between the parentheses a quoted string using single quotes. The string is treated as a single line of input from a file and parsed.

Finally, you can use FOR /F to parse the output of a command. You do this by making the filenameset between the parentheses a back-quoted string. The string is treated as a command line and passed to a child CMD.EXE; its output is captured into memory and parsed as if it were a file. So the following example:

          FOR /F "usebackq delims==" %i IN (`set`) DO @echo %i

would enumerate the environment variable names in the current environment.

In addition, substitution of FOR variable references has been enhanced. You can now use the following optional syntax:

         %~I        - expands %I removing any surrounding quotes (")
         %~fI       - expands %I to a fully qualified path name
         %~dI       - expands %I to a drive letter only
         %~pI       - expands %I to a path only
         %~nI       - expands %I to a file name only
         %~xI       - expands %I to a file extension only
         %~sI       - expanded path contains short names only
         %~aI       - expands %I to the file attributes of the file
         %~tI       - expands %I to the date/time of the file
         %~zI       - expands %I to the size of the file
         %~$PATH:I  - searches the directories listed in the PATH
                      environment variable and expands %I to the fully
                      qualified name of the first one found. If the
                      environment variable name is not defined or the
                      file is not found, this modifier expands to the
                      empty string

The modifiers can be combined to get compound results:

         %~dpI       - expands %I to a drive letter and path only
         %~nxI       - expands %I to a file name and extension only
         %~fsI       - expands %I to a full path name with short names only
         %~dp$PATH:I - searches the directories listed in the PATH
                       environment variable for %I and expands to the
                       drive letter and path of the first one found.
         %~ftzaI     - expands %I to a DIR-like output line

In the above examples %I and PATH can be replaced by other valid values. The %~ syntax is terminated by a valid FOR variable name. Picking upper case variable names like %I makes it more readable and avoids confusion with the modifiers, which are not case sensitive.
========================
http://iihero.cn
Welcome to iihero lab.
========================
Regards,
Sean.

Wednesday, February 20, 2008

authenticate_parameters connection event in ASA10(SQL Anywhere10.0.1)

Receives values from the remote that can be used to authenticate beyond a user ID and password. The values can also be used to arbitrarily customize each synchronization.

Parameters

In the following table, the description provides the SQL data type. If you are writing your script in Java or .NET, you should use the appropriate corresponding data type. See SQL-Java data types and SQL-.NET data types.

In SQL scripts, you can specify event parameters by name or with a question mark, but you cannot mix names and question marks within a script. If you use question marks, the parameters must be in the order shown below and are optional only if no subsequent parameters are specified (for example, you must use parameter 1 if you want to use parameter 2). If you use named parameters, you can specify any subset of the parameters in any order.

Parameter name for SQL scripts    Description                                    Order

s.authentication_status           INTEGER. This is an INOUT parameter.           1

s.remote_id                       VARCHAR(128). The MobiLink remote ID. You      Not applicable
                                  can only reference the remote ID if you
                                  are using named parameters.

s.username                        VARCHAR(128). The MobiLink user name.          2

a.N (one or more)                 VARCHAR(128). For example, named parameters    3...
                                  could be a.1, a.2.

Parameter Description
  • authentication_status  The authentication_status parameter is required. It indicates the overall success of the authentication, and can be set to one of the following values:

    Value V returned by script    authentication_status    Description

    V <= 1999                     1000                     Authentication succeeded.

    1999 < V <= 2999              2000                     Authentication succeeded, but password
                                                           expiring soon.

    2999 < V <= 3999              3000                     Authentication failed as password has
                                                           expired.

    3999 < V <= 4999              4000                     Authentication failed.

    4999 < V <= 5999              5000                     Authentication failed as user is
                                                           already synchronizing.

    5999 < V                      4000                     If the returned value is greater than
                                                           5999, MobiLink interprets it as a
                                                           returned value of 4000 (authentication
                                                           failed).
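The banding rule can be written out directly. A small sketch of my reading of the table, folding a value V returned by the script into the status MobiLink reports:

```shell
# Fold a script's return value V into the resulting authentication_status.
auth_status() {
  if   [ "$1" -le 1999 ]; then echo 1000   # succeeded
  elif [ "$1" -le 2999 ]; then echo 2000   # succeeded, password expiring soon
  elif [ "$1" -le 3999 ]; then echo 3000   # failed, password expired
  elif [ "$1" -le 4999 ]; then echo 4000   # failed
  elif [ "$1" -le 5999 ]; then echo 5000   # user already synchronizing
  else                         echo 4000   # anything above 5999: plain failure
  fi
}
auth_status 1500   # → 1000
auth_status 7000   # → 4000
```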

  • username  This parameter is the MobiLink user name. VARCHAR(128).

  • remote_ID   The MobiLink remote ID. You can only reference the remote ID if you are using named parameters.

    See Using remote IDs and MobiLink user names in scripts.

  • remote_parameters  The number of remote parameters must match the number expected or an error results. An error also occurs if parameters are sent from the client and there is no script for this event.

Remarks

You can send strings (or parameters in the form of strings) from both SQL Anywhere and UltraLite clients. This allows you to have authentication beyond a user ID and password. It also means that you can customize your synchronization based on the value of parameters, and do this in a pre-synchronization phase, during authentication.

The MobiLink server executes this event upon starting each synchronization. It is executed in the same transaction as the authenticate_user event.

You can use this event to replace the built-in MobiLink authentication mechanism with a custom mechanism. You may want to call into the authentication mechanism of your DBMS, or you may want to implement features not present in the MobiLink built-in mechanism.

If the authenticate_user or authenticate_user_hashed scripts are invoked and return an error, this event is not called.

SQL scripts for the authenticate_parameters event must be implemented as stored procedures.

A simple UI for my hexdb manager

While using SQLite-JDBC, I ran into garbled characters when data inserted from the command line and data inserted through Java JDBC were read from the other side.

It is still being polished. I want to turn it into a general-purpose tool that can manage several common kinds of databases,
and also provide the usual ETL functions.

Handling MobiLink server errors in Java through Implementing LogListener Interface

When scanning the log is not sufficient, you can monitor your applications programmatically. For example, you can send messages of a certain type in an email.
You can write methods that are passed a class representing every error or warning message that is printed to the log. This may help you monitor and audit a MobiLink server.
The following code installs a LogListener for all warning messages, and writes the information to a file.



// Imports added for completeness; ianywhere.ml.script is the package of the
// MobiLink server-side Java API in SQL Anywhere 10.
import ianywhere.ml.script.LogListener;
import ianywhere.ml.script.LogMessage;
import ianywhere.ml.script.ServerContext;
import java.io.FileOutputStream;

class TestLogListener implements LogListener {
  FileOutputStream _out_file;
  public TestLogListener( FileOutputStream out_file ) {
    _out_file       = out_file;
  }

  public void messageLogged(  ServerContext   sc,
    LogMessage msg ) {
    String  type;
    String  user;
    try {
      if(msg.getType() == LogMessage.ERROR) {
        type = "ERROR";
      } else if(msg.getType() == LogMessage.WARNING) {
        type = "WARNING";
      } else {
        type = "UNKNOWN!!!";
      }

      user = msg.getUser();
      if( user == null ) {
        user = "NULL";
      }
      _out_file.write(
        ("Caught msg type=" + type +
         " user=" + user +
         " text=" +msg.getText() +
         "\n").getBytes() );
      _out_file.flush();
    } catch( Exception e ) {
      // Print some error output to the MobiLink log.
      e.printStackTrace();
    }
  }
}

The following code registers TestLogListener to receive warning messages. Call this code from anywhere that has access to the ServerContext, such as a class constructor or synchronization script.

// ServerContext serv_context;
serv_context.addWarningListener( new TestLogListener( ll_out_file ) );





The Global autoincrement default extension in ASA10 (Important)

When I used this type:
create table Admin (
  admin_id      bigint default global autoincrement(1000000) primary key,
  data          varchar(30),
  last_modified timestamp default timestamp
);
a direct insert into Admin(data) values(1)
failed. The reason is that one option had not been set:
the value of public.Global_database_id.
set option public.Global_database_id = 10;
insert into Admin(data) values('21425.34');
select * from Admin;
admin_id,data,last_modified
10000001,'21425.34','2008-02-20 17:09:31.111'
Values start just above global_database_id * partition size (here 10 * 1000000) and can grow up to (global_database_id + 1) * partition size.
The detailed explanation follows:
The GLOBAL AUTOINCREMENT default is intended for use when multiple databases are used in a SQL Remote replication or MobiLink synchronization environment. It ensures unique primary keys across multiple databases.
This option is similar to AUTOINCREMENT, except that the domain is partitioned. Each partition contains the same number of values. You assign each copy of the database a unique global database identification number. SQL Anywhere supplies default values in a database only from the partition uniquely identified by that database's number.
The partition size can be any positive integer, although the partition size is generally chosen so that the supply of numbers within any one partition will rarely, if ever, be exhausted.
If the column is of type BIGINT or UNSIGNED BIGINT, the default partition size is 2^32 = 4294967296; for columns of all other types, the default partition size is 2^16 = 65536. Since these defaults may be inappropriate, especially if your column is not of type INT or BIGINT, it is best to specify the partition size explicitly.
When using this option, the value of the public option global_database_id in each database must be set to a unique, non-negative integer. This value uniquely identifies the database and indicates from which partition default values are to be assigned. The range of allowed values is np + 1 to (n + 1) p, where n is the value of the public option global_database_id and p is the partition size. For example, if you define the partition size to be 1000 and set global_database_id to 3, then the range is from 3001 to 4000.
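The partition arithmetic from the paragraph above, written out as a quick sketch (p and n are the partition size and global_database_id from the documentation's own example):

```shell
# Defaults are drawn from the range n*p+1 through (n+1)*p.
p=1000 n=3
echo "$(( n * p + 1 )) to $(( (n + 1) * p ))"
# → 3001 to 4000
```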
If the previous value is less than (n + 1) p, the next default value is one greater than the previous largest value in column. If the column contains no values, the first default value is np + 1. Default column values are not affected by values in the column outside of the current partition; that is, by numbers less than np + 1 or greater than p(n + 1). Such values may be present if they have been replicated from another database via MobiLink synchronization.
Because the public option global_database_id cannot be set to a negative value, the values chosen are always positive. The maximum identification number is restricted only by the column data type and the partition size.
If the public option global_database_id is set to the default value of 2147483647, a NULL value is inserted into the column. If NULL values are not permitted, attempting to insert the row causes an error. This situation arises, for example, if the column is contained in the table's primary key.
NULL default values are also generated when the supply of values within the partition has been exhausted. In this case, a new value of global_database_id should be assigned to the database to allow default values to be chosen from another partition. Attempting to insert the NULL value causes an error if the column does not permit NULLs. To detect that the supply of unused values is low and handle this condition, create an event of type GlobalAutoincrement. See Understanding events.
Global autoincrement columns are typically primary key columns or columns constrained to hold unique values (see Enforcing entity integrity).
While using the global autoincrement default in other cases is possible, doing so can adversely affect database performance. For example, in cases where the next value for each column is stored as a 64-bit signed integer, using values greater than 2^31 - 1 or large double or numeric values may cause wraparound to negative values.
You can retrieve the most recent value inserted into an autoincrement column using the @@identity global variable. For more information, see @@identity global variable.

MobiLink Event pseudo code

 
 
------------------------------------------------------
MobiLink complete event model.
------------------------------------------------------
Legend:
- // This is a comment.
- <name>
    The pseudo code for <name> is listed separately
    in a later section, under a banner:
        ------------------------
        name
        ------------------------
- VariableName <- value
    Assign the given value to the given variable name.
    Variable names are in mixed case.
- event_name
    If you have defined a script for the given event name,
    it will be invoked.
------------------------------------------------------
 

CONNECT to consolidated database
begin_connection_autocommit
begin_connection
COMMIT
for each synchronization request with
     the same script version {
  <synchronize>
}
end_connection
COMMIT
DISCONNECT from consolidated database
 

------------------------------------------------------
synchronize
------------------------------------------------------
 
<authenticate>
<begin_synchronization>
<upload>
<prepare_for_download>
<download>
<end_synchronization>
 

------------------------------------------------------
authenticate
------------------------------------------------------
 
Status <- 1000
UseDefaultAuthentication <- TRUE
if( authenticate_user script is defined ) {
  UseDefaultAuthentication <- FALSE
  TempStatus <- authenticate_user
  if( TempStatus > Status ) {
    Status <- TempStatus
  }
}
 
if( authenticate_user_hashed script is defined ) {
  UseDefaultAuthentication <- FALSE
  TempStatus <- authenticate_user_hashed
  if( TempStatus > Status ) {
    Status <- TempStatus
  }
}
if( authenticate_parameters script is defined ) {
  TempStatus <- authenticate_parameters
  if( TempStatus > Status ) {
    Status <- TempStatus
  }
}
 
if( UseDefaultAuthentication ) {
  if( the user exists in the ml_user table ) {
    if( ml_user.hashed_password column is not NULL ) {
      if( password matches ml_user.hashed_password ) {
        Status <- 1000
      } else {
        Status <- 4000
      }
    } else {
      Status <- 1000
    }
  } else if( -zu+ was on the command line ) {
    Status <- 1000
  } else {
    Status <- 4000
  }
}
if( Status >= 3000 ) {
  // Abort the synchronization.
} else {
  // UserName defaults to MobiLink user name
  // sent from the remote.
  if( modify_user script is defined ) {
    UserName <- modify_user
    // The new value of UserName is later passed to
    // all scripts that expect the MobiLink user name.
  }
}
COMMIT
 
------------------------------------------------------
begin_synchronization
------------------------------------------------------
 
begin_synchronization   // Connection event.
for each table being synchronized {
    begin_synchronization    // Call the table level script.
}
for each publication being synchronized {
  begin_publication
}
COMMIT
 

------------------------------------------------------
end_synchronization
------------------------------------------------------
 
for each publication being synchronized {
  if( begin_publication script was called ) {
    end_publication
  }
}
for each table being synchronized {
  if( begin_synchronization table script was called ) {
    end_synchronization // Table event.
  }
}
if( begin_synchronization connection script was called ) {
  end_synchronization     // Connection event.
}
for each table being synchronized {
synchronization_statistics // Table event.
}
synchronization_statistics // Connection event.
for each table being synchronized {
  time_statistics // Table event.
}
time_statistics // Connection event.
 
COMMIT
======================================================
------------------------------------------------------
Events during upload
------------------------------------------------------
The following pseudocode illustrates how upload events
and upload scripts are invoked. These events take place
at the upload location in the complete event model. See
Overview of MobiLink events and Overview of the upload.
------------------------------------------------------
upload
------------------------------------------------------
begin_upload // Connection event
for each table being synchronized {
  begin_upload // Table event
}
  handle_UploadData
  for each table being synchronized {
    begin_upload_rows
    for each uploaded INSERT or UPDATE for this table {
      if( INSERT ) {
        <upload_inserted_row>
      }
      if( UPDATE ) {
        <upload_updated_row>
      }
    }
    end_upload_rows
  }
  for each table being synchronized IN REVERSE ORDER {
    begin_upload_deletes
    for each uploaded DELETE for this table {
      <upload_deleted_row>
    }
    end_upload_deletes
  }
 
for each table being synchronized {
  if( begin_upload table script was called ) {
    end_upload // Table event
  }
}
if( begin_upload connection script was called ) {
  end_upload // Connection event
}
for each table being synchronized {
  upload_statistics  // Table event.
}
upload_statistics  // Connection event.
 
COMMIT
Upload inserts
------------------------------------------------------
<upload_inserted_row>
------------------------------------------------------
// NOTES:
// - Only table scripts for the current table are involved.
 
  ConflictsAreExpected <- (
       upload_new_row_insert script is defined
    or upload_old_row_insert script is defined
    or resolve_conflict script is defined )
  if( upload_insert script is defined ) {
    upload_insert
  } else if( ConflictsAreExpected
      and upload_update script is not defined
      and upload_insert script is not defined
      and upload_delete script is not defined ) {
      // Forced conflict.
      upload_new_row_insert
      resolve_conflict
  } else {
      // Ignore the insert.
  }
 
Upload updates
------------------------------------------------------
upload_updated_row
------------------------------------------------------
// NOTES:
// - Only table scripts for the current table are involved.
// - Both the old (original) and new rows are uploaded for
//   each update.
 
  ConflictsAreExpected <- (
       upload_new_row_insert script is defined
    or upload_old_row_insert script is defined
    or resolve_conflict script is defined )
  Conflicted <- FALSE
  if( upload_update script is defined ) {
    if( ConflictsAreExpected
      and upload_fetch script is defined ) {
      FETCH using upload_fetch INTO current_row
      if( current_row <> old row ) {
        Conflicted <- TRUE
      }
    }
    if( not Conflicted ) {
      upload_update
    }
  } else if( upload_update script is not defined
      and upload_insert script is not defined
      and upload_delete script is not defined ) {
      // Forced conflict.
      Conflicted <- TRUE
  }
  if( ConflictsAreExpected and Conflicted ) {
    upload_old_row_insert
    upload_new_row_insert
    resolve_conflict
  }
 

Upload deletes
------------------------------------------------------
upload_deleted_row
------------------------------------------------------
// NOTES:
// - Only table scripts for the current table are involved.
 
  ConflictsAreExpected <- (
       upload_new_row_insert script is defined
    or upload_old_row_insert script is defined
    or resolve_conflict script is defined )
  if( upload_delete is defined ) {
    upload_delete
  } else if( ConflictsAreExpected
    and upload_update script is not defined
    and upload_insert script is not defined
    and upload_delete script is not defined ) {
    // Forced conflict.
    upload_old_row_insert
    resolve_conflict
  } else {
    // Ignore this delete.
  }
 
=========================================================
------------------------------------------------------
prepare_for_download
------------------------------------------------------
 
modify_last_download_timestamp
prepare_for_download
if( modify_last_download_timestamp script is defined
    or prepare_for_download script is defined ) {
    COMMIT
}
------------------------------------------------------
download
------------------------------------------------------
 
begin_download // Connection event.
for each table being synchronized {
   begin_download // Table event.
}
   handle_DownloadData
   for each table being synchronized {
     begin_download_deletes
     for each row in download_delete_cursor {
       if( all primary key columns are NULL ) {
         send TRUNCATE to remote
       } else {
         send DELETE to remote
       }
     }
     end_download_deletes
     begin_download_rows
     for each row in download_cursor {
       send INSERT ON EXISTING UPDATE to remote
     }
     end_download_rows
   }
   modify_next_last_download_timestamp
   for each table being synchronized {
     if( begin_download table script was called ) {
       end_download // Table event
     }
   }
   if( begin_download connection script was called ) {
     end_download // Connection event
   }
   for each table being synchronized {
     download_statistics   // Table event.
   }
   download_statistics   // Connection event.
 
COMMIT
 

 

The TIMESTAMP type in ASA10 (SQL Anywhere 10) (rather curious)

TIMESTAMP indicates when each row in the table was last modified. When a column is declared with DEFAULT TIMESTAMP, a default value is provided for inserts, and the value is updated with the current date and time whenever the row is updated.

Data type

TIMESTAMP

Remarks

Columns declared with DEFAULT TIMESTAMP contain unique values so that applications can detect near-simultaneous updates to the same row. If the current timestamp value is the same as the last value, it is incremented by the value of the default_timestamp_increment option.
You can automatically truncate timestamp values in SQL Anywhere based on the default_timestamp_increment option. This is useful for maintaining compatibility with other database software that records less precise timestamp values.
The global variable @@dbts returns a TIMESTAMP value representing the last value generated for a column using DEFAULT TIMESTAMP.
The main difference between DEFAULT TIMESTAMP and DEFAULT CURRENT TIMESTAMP is that DEFAULT CURRENT TIMESTAMP is set only at INSERT, while DEFAULT TIMESTAMP is set at both INSERT and UPDATE.


Tuesday, February 19, 2008

My ant wrapper batch script

Not wanting to make the %PATH% in my environment variables that long, I wrote an ant wrapper batch script, sean_ant.bat, and dropped it into any directory already on %PATH%:
@echo off
SETLOCAL
set JAVA_HOME=d:\shared\jdk1.5.0_12
set ANT_HOME=d:\shared\apache-ant-1.7.0
set PATH=%ANT_HOME%\bin;%JAVA_HOME%\bin;%PATH%
ant.bat %*

echo "Finished sean_ant compiling."
ENDLOCAL
Then just run: sean_ant.bat -f build.xml <target>
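A POSIX sketch of the same wrapper idea: a subshell plays the role of SETLOCAL/ENDLOCAL, so the PATH change dies with it. The install paths are made up for the example, and ant itself is not invoked here; echo just shows that the prepended entry wins.

```shell
(
  JAVA_HOME=/opt/jdk1.5.0_12          # hypothetical install locations
  ANT_HOME=/opt/apache-ant-1.7.0
  PATH="$ANT_HOME/bin:$JAVA_HOME/bin:$PATH"
  export JAVA_HOME ANT_HOME PATH
  echo "$PATH" | cut -d: -f1          # first PATH entry inside the subshell
)
# → /opt/apache-ant-1.7.0/bin
```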

Visual Studio C/C++ Compiler Options (Summary)

                          C/C++ COMPILER OPTIONS
 
                              -OPTIMIZATION-
 
/O1 minimize space                      /Op[-] improve floating-pt consistency
/O2 maximize speed                      /Os favor code space
/Oa assume no aliasing                  /Ot favor code speed
/Ob<n> inline expansion (default n=0)   /Ow assume cross-function aliasing
/Od disable optimizations (default)     /Ox maximum opts. (/Ogityb2 /Gs)
/Og enable global optimization          /Oy[-] enable frame pointer omission
/Oi enable intrinsic functions
 
                             -CODE GENERATION-
 
/G3 optimize for 80386                  /Gh enable _penter function call
/G4 optimize for 80486                  /GH enable _pexit function call
/G5 optimize for Pentium                /GR[-] enable C++ RTTI
/G6 optimize for PPro, P-II, P-III      /GX[-] enable C++ EH (same as /EHsc)
/G7 optimize for Pentium 4 or Athlon    /EHs enable C++ EH (no SEH exceptions)
/GB optimize for blended model (default) /EHa enable C++ EH (w/ SEH exceptions)
/Gd __cdecl calling convention          /EHc extern "C" defaults to nothrow
/Gr __fastcall calling convention       /GT generate fiber-safe TLS accesses
/Gz __stdcall calling convention        /Gm[-] enable minimal rebuild
/GA optimize for Windows application    /GL[-] enable link-time code generation
/Gf enable string pooling               /QIfdiv[-] enable Pentium FDIV fix
/GF enable read-only string pooling     /QI0f[-] enable Pentium 0x0f fix
/Gy separate functions for linker       /QIfist[-] use FIST instead of ftol()
/GZ enable stack checks (/RTCs)         /RTC1 enable fast checks (/RTCsu)
/Ge force stack checking for all funcs  /RTCc convert-to-smaller-type checks
/Gs[num] control stack checking calls   /RTCs stack frame runtime checking
/GS enable security checks              /RTCu uninitialized local usage checks
/clr[:noAssembly] compile for the common language runtime
    noAssembly - do not produce an assembly
/arch:<SSE|SSE2> minimum CPU architecture requirement, one of:
    SSE - enable use of instructions available on CPUs with SSE support
    SSE2 - enable use of instructions available on CPUs with SSE2 support
 
                              -OUTPUT FILES-
 
/Fa[file] name assembly listing file    /Fo<file> name object file
/FA[sc] configure assembly listing      /Fp<file> name precompiled header file
/Fd[file] name .PDB file                /Fr[file] name source browser file
/Fe<file> name executable file          /FR[file] name extended .SBR file
/Fm[file] name map file
 
                              -PREPROCESSOR-
 
/AI<dir> add to assembly search path    /Fx merge injected code to file
/FU<file> forced using assembly/module  /FI<file> name forced include file
/C don't strip comments                 /U<name> remove predefined macro
/D<name>{=|#}<text> define macro        /u remove all predefined macros
/E preprocess to stdout                 /I<dir> add to include search path
/EP preprocess to stdout, no #line      /X ignore "standard places"
/P preprocess to file
 
                                -LANGUAGE-
 
/Zi enable debugging information        /Ze enable extensions (default)
/ZI enable Edit and Continue debug info /Zl omit default library name in .OBJ
/Z7 enable old-style debug info         /Zg generate function prototypes
/Zd line number debugging info only     /Zs syntax check only
/Zp[n] pack structs on n-byte boundary  /vd{0|1} disable/enable vtordisp
/Za disable extensions (implies /Op)    /vm<x> type of pointers to members
/Zc:arg1[,arg2] C++ language conformance, where arguments can be:
    forScope - enforce Standard C++ scoping rules
    wchar_t - wchar_t is the native type, not a typedef
 
                              -MISCELLANEOUS-
 
@<file> options response file           /wo<n> issue warning n once
/?, /help print this help message       /w<l><n> set warning level 1-4 for n
/c compile only, no link                /W<n> set warning level (default n=1)
/H<num> max external name length        /Wall enable all warnings
/J default char type is unsigned        /Wp64 enable 64-bit porting warnings
/nologo suppress copyright message      /WX treat warnings as errors
/showIncludes show include file names   /WL enable one-line diagnostics
/Tc<source file> compile file as .c     /Yc[file] create .PCH file
/Tp<source file> compile file as .cpp   /Yd put debug info in every .OBJ
/TC compile all files as .c             /Yl[sym] inject .PCH ref for debug lib
/TP compile all files as .cpp           /Yu[file] use .PCH file
/V<string> set version string           /YX[file] automatic .PCH
/w disable all warnings                 /Y- disable all PCH options
/wd<n> disable warning n                /Zm<n> max memory alloc (% of default)
/we<n> treat warning n as an error
 
                                 -LINKING-
 
/MD link with MSVCRT.LIB                /MDd link with MSVCRTD.LIB debug lib
/ML link with LIBC.LIB                  /MLd link with LIBCD.LIB debug lib
/MT link with LIBCMT.LIB                /MTd link with LIBCMTD.LIB debug lib
/LD create .DLL                         /F<num> set stack size
/LDd create .DLL debug library          /link [linker options and libraries]

JVM Specification Notes (1)

JVM specification:
The JVM is an abstract machine; it has no direct tie to any concrete implementation.
The JVM supports two kinds of data types: primitive types and reference types.
Floating-point follows the IEEE 754 standard; "NaN" stands for Not a Number.
boolean supports relational and logical operations; there is no direct conversion between it and the other primitive types.
Reference types come in three kinds: class types, interface types, and array types. Objects are dynamically created class or array instances.
A class instance can be created in two ways: via Class.newInstance(), or via a class instance creation expression.
Objects are created on the heap and garbage-collected when no references to them remain. The space an object occupies cannot be reclaimed by an explicit language call.
If any reference to an object is used to change the object, the new state is visible through every reference to that object.
Every object has an associated lock, used by synchronized methods and synchronized statements.
Reference types form a hierarchy; a variable of type Object can hold a reference to any object (a class instance or an array).
Kinds of variables:
 Class variables: declared with the static keyword in a class declaration, or (with or without static) in an interface declaration. They are initialized to default values when the class or interface is loaded, and cease to exist when it is unloaded.
 Instance variables: declared in a class without static. If class T has an instance variable a, a new a is created in every newly created instance of T or its subclasses; it reaches end of life when its object is no longer referenced and all necessary finalization has completed.
 Array components: unnamed variables, created and initialized to default values whenever a new array object is created; they cease to exist when the array is no longer referenced.
 Method parameters: for each parameter in a method declaration, a new parameter variable is created on every invocation; it ceases to exist when the method body completes.
 Constructor parameters: analogous to method parameters.
 Exception-handler parameters: cease to exist when the catch block completes.
 Local variables: initialized only when their declaration is actually executed.
Default initial values of variables:
 Class variables, instance variables, and array components are initialized to the default value for their type:
 byte, short, int: 0; long: 0L; float: 0.0f; double: 0.0d; char: '\u0000'
 boolean: false
 reference types: null
 A method parameter takes the value of the corresponding argument.
 A local variable must be explicitly given a value, by initialization or assignment, before it is used.
    Type is a compile-time notion; a variable or expression has a type, while an object or array has no type but does have a class.
    Every array has a class; array class names are odd-looking and are not valid identifiers, e.g. the class of an int array is named "[I".
    The JVM exits when either of two conditions holds:
 all non-daemon threads have terminated, or
 some thread invokes the exit method of class Runtime or System, and the exit is permitted by the security manager.
        Calling System's runFinalizersOnExit(true) method forces the finalize methods of all classes to run before exit; by default they are not run.
Chapter 3:
    The returnAddress type and its values: used by the jsr, ret, and jsr_w instructions; a value is a pointer to the opcode of a JVM instruction.
    boolean has no dedicated instructions; int is used instead, except that boolean arrays use the baload and bastore instructions. The JVM uses 1 for true and 0 for false.
    reference types.
    Run-time data areas: 3.5.1 (***)

Out of memory in Java systems that use many threads (Summary)

I recently came across this exception on a couple of Java systems that use many threads: java.lang.OutOfMemoryError: unable to create new native thread. The strange thing was that the JVM had been assigned a lot of memory (1.5GB) and had at least half of it available. Michele found this article, which points out that the more memory you give to the JVM, the more likely you are to get java.lang.OutOfMemoryError: unable to create new native thread exceptions when you have many threads.

Which makes perfect sense when you think about it. Each 32-bit process on Windows has 2GB of "available" memory, as the other 2GB is reserved for Windows. In my case the JVM grabbed 1.5GB, leaving 500MB. Part of that 500MB was used to map system DLLs and the like in memory, so less than 400MB was left. Now to the crucial point: when you create a thread in Java, it creates a Thread object in the JVM memory, but it also creates an operating system thread. The operating system creates that thread with a thread stack in the 400MB that is left, not in the 1.5GB allocated to the JVM. Java 1.4 uses a default stack size of 256KB, but Java 1.5 uses a 1MB stack per thread. So, in the 400MB left to the process, I could only create ~400 threads. Absurd but true: to create more threads you have to reduce the memory allocated to the JVM. Another option is to host the JVM in your own process using JNI.

This formula gives a decent estimate for the number of threads you can create:
(MaxProcessMemory - JVMMemory - ReservedOsMemory) / (ThreadStackSize) = Number of threads

For Java 1.5 I get the following results assuming that the OS reserves about 120MB:
1.5GB allocated to JVM: (2GB-1.5Gb-120MB)/(1MB) = ~380 threads
1.0GB allocated to JVM: (2GB-1.0Gb-120MB)/(1MB) = ~880 threads

Java 1.4 uses 256kb for the thread stack which lets you create a lot more threads:
1.5GB allocated to JVM: ~1520 threads
1.0GB allocated to JVM: ~3520 threads
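As a sanity check, the formula is easy to evaluate in a few lines of Python (a quick sketch; 2GB = 2048MB, and the ~120MB OS reservation is the estimate used above — the article's figures are rounded down slightly):

```python
def max_threads(max_process_mb, jvm_memory_mb, reserved_os_mb, stack_mb):
    """(MaxProcessMemory - JVMMemory - ReservedOsMemory) / ThreadStackSize"""
    return int((max_process_mb - jvm_memory_mb - reserved_os_mb) / stack_mb)

# Java 1.5: 1MB default thread stack
print(max_threads(2048, 1536, 120, 1.0))   # 392, i.e. ~380 threads
print(max_threads(2048, 1024, 120, 1.0))   # 904, i.e. ~880 threads

# Java 1.4: 256KB default thread stack
print(max_threads(2048, 1536, 120, 0.25))  # 1568, i.e. ~1520 threads
print(max_threads(2048, 1024, 120, 0.25))  # 3616, i.e. ~3520 threads
```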

I have not tried the /3GB switch, but in theory it should let you create more threads.

Quick notes on crontab usage

Crontab - Quick reference

Setting up cronjobs in Unix and Solaris

cron is a unix/solaris utility that allows tasks to be run automatically in the background at regular intervals by the cron daemon. These tasks are often termed cron jobs in unix/solaris.

Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run, and the times at which to run them.

 

The following points sum up crontab's functionality:

1. Crontab Restrictions

2. Crontab Commands

3. Crontab file - syntax

4. Crontab Example

5. Crontab Environment

6. Disable Email

7. Generate log file for crontab activity

8. Next Steps

 

1. Crontab Restrictions

____________

You can execute crontab if your name appears in the file /usr/lib/cron/cron.allow. If that file does not exist, you can use crontab if your name does not appear in the file /usr/lib/cron/cron.deny.

If only cron.deny exists and is empty, all users can use crontab. If neither file exists, only the root user can use crontab. The allow/deny files consist of one user name per line.
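The allow/deny rules above can be summarized as a small decision function (an illustrative Python sketch only; `allow` and `deny` stand for the parsed contents of cron.allow and cron.deny, or None when the corresponding file does not exist):

```python
def may_use_crontab(user, allow=None, deny=None):
    """Decide crontab access per the cron.allow / cron.deny rules.
    allow/deny are lists of user names, or None if the file is absent."""
    if allow is not None:          # cron.allow exists: it alone decides
        return user in allow
    if deny is not None:           # only cron.deny exists
        return user not in deny    # an empty cron.deny allows everyone
    return user == 'root'          # neither file exists: root only

print(may_use_crontab('alice', allow=['alice']))  # True
print(may_use_crontab('alice', deny=[]))          # True: empty cron.deny
print(may_use_crontab('alice'))                   # False: root only
```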

 

 

2. Crontab Commands

__________

export EDITOR=vi      to specify an editor for opening the crontab file.

 

crontab -e     Edit your crontab file, or create one if it doesn't already exist.

crontab -l      Display your crontab file.

crontab -r      Remove your crontab file.

crontab -v      Display the last time you edited your crontab file. (This option is only available on a few systems.)

 

 

3. Crontab file

___________

Crontab syntax :-

A crontab file has five fields for specifying day, date and time, followed by the command to be run at that interval.

*     *     *     *     *     command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- day of week (0 - 6) (Sunday=0)
|     |     |     +------- month (1 - 12)
|     |     +--------- day of month (1 - 31)
|     +----------- hour (0 - 23)
+------------- min (0 - 59)

 

 

 

A * in a value field above means all legal values (the ranges shown) for that column.

The value column can have a * or a list of elements separated by commas. An element is either a number in the ranges shown above or two numbers in the range separated by a hyphen (meaning an inclusive range).
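As an illustration of that field syntax, here is a hypothetical Python helper that expands one field into the values it matches (it handles only `*`, comma lists, and hyphenated inclusive ranges, as described above):

```python
def expand_field(field, lo, hi):
    """Expand one crontab field ('*', 'a,b', 'a-b', or combinations)
    into the list of matching integer values within [lo, hi]."""
    if field == '*':
        return list(range(lo, hi + 1))
    values = []
    for element in field.split(','):
        if '-' in element:                   # inclusive range, e.g. 1-5
            start, end = element.split('-')
            values.extend(range(int(start), int(end) + 1))
        else:                                # single value, e.g. 12
            values.append(int(element))
    return values

print(expand_field('1,6,12', 1, 12))  # [1, 6, 12]
print(expand_field('1-5', 0, 6))      # [1, 2, 3, 4, 5]
```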

 

Note: Days can be specified in two fields: day of month and day of week. If both are restricted in an entry, they are cumulative: the command runs when either field matches.

 

4. Crontab Example

_______

 

A line in the crontab file like the one below removes the tmp files from /home/someuser/tmp each day at 6:30 PM.

 

30     18     *     *     *         rm /home/someuser/tmp/*

 

 

 

Changing the parameter values as below will cause this command to run on a different schedule:

min    hour   day/month   month    day/week    Execution time
30     0      1           1,6,12   *           -- 00:30 Hrs on 1st of Jan, June & Dec.
0      20     *           10       1-5         -- 8.00 PM every weekday (Mon-Fri) only in Oct.
0      0      1,10,15     *        *           -- midnight on 1st, 10th & 15th of month
5,10   0      10          *        1           -- At 12.05, 12.10 every Monday & on the 10th of every month

 

 

Note : If you inadvertently enter the crontab command with no argument(s), do not attempt to get out with Control-d. This removes all entries in your crontab file. Instead, exit with Control-c.

 

5. Crontab Environment

___________

cron invokes the command from the user's HOME directory with the shell (/usr/bin/sh).

cron supplies a default environment for every shell, defining:

HOME=user's-home-directory

LOGNAME=user's-login-id

PATH=/usr/bin:/usr/sbin:.

SHELL=/usr/bin/sh

 

Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a script called by the entry.

 

6. Disable Email

____________

 

By default a cron job sends an email to the user account executing it. If this is not needed, put the following at the end of the cron job line.

 

>/dev/null 2>&1

 

7. Generate log file

________________

 

To collect the cron execution log in a file:

 

30 18  *    *   *    rm /home/someuser/tmp/* > /home/someuser/cronlogs/clean_tmp_dir.log

8. Next Steps

 

This article covered a significant aspect of system administration: setting up cron jobs. Unix administration involves many different tasks; some of them are covered on this website, but there are still many areas not covered here.

 


A useful command discovered in XP, for deleting and querying the service names on a system

Typing the command SC gives:
DESCRIPTION:
        SC is a command line program used for communicating with the
        NT Service Controller and services.
USAGE:
        sc <server> [command] [service name] <option1> <option2>...
 
        The option <server> has the form "\\ServerName"
        Further help on commands can be obtained by typing: "sc [command]"
        Commands:
          query-----------Queries the status for a service, or
                          enumerates the status for types of services.
          queryex---------Queries the extended status for a service, or
                          enumerates the status for types of services.
          start-----------Starts a service.
          pause-----------Sends a PAUSE control request to a service.
          interrogate-----Sends an INTERROGATE control request to a service.
          continue--------Sends a CONTINUE control request to a service.
          stop------------Sends a STOP request to a service.
          config----------Changes the configuration of a service (persistant).
          description-----Changes the description of a service.
          failure---------Changes the actions taken by a service upon failure.
          qc--------------Queries the configuration information for a service.
          qdescription----Queries the description for a service.
          qfailure--------Queries the actions taken by a service upon failure.
          delete----------Deletes a service (from the registry).
          create----------Creates a service. (adds it to the registry).
          control---------Sends a control to a service.
          sdshow----------Displays a service's security descriptor.
          sdset-----------Sets a service's security descriptor.
          GetDisplayName--Gets the DisplayName for a service.
          GetKeyName------Gets the ServiceKeyName for a service.
          EnumDepend------Enumerates Service Dependencies.
 
        The following commands don't require a service name:
        sc <server> <command> <option>
          boot------------(ok | bad) Indicates whether the last boot should
                          be saved as the last-known-good boot configuration
          Lock------------Locks the Service Database
          QueryLock-------Queries the LockStatus for the SCManager Database
EXAMPLE:
        sc start MyService

Using python-sybase to access an ASE database (1)

Python 2.5 ships with the standard PEP 249 (Python Database API Specification v2.0), which provides a largely uniform interface for accessing all kinds of databases. For the ASE database, the open-source python-sybase module implements this interface; it can be downloaded from http://python-sybase.sourceforge.net.

1.1 Installing the python-sybase module

First, make sure Python is already installed on the system, along with the Sybase ASE client, which includes Open Client; you can verify this via the %SYBASE% and %SYBASE_OCS% environment variables ($SYBASE and $SYBASE_OCS on linux/unix).

Also, since the installation needs to compile and link the C code in the package, on the Windows platform you need Microsoft Visual Studio .NET 2003\Vc7 to do the build.

Download the python-sybase package from http://downloads.sourceforge.net/python-sybase/python-sybase-0.38.tar.gz and unpack it into the directory python-sybase-0.38, whose structure is as follows:

E:\LEARN\PYTHON\REF\PYTHON-SYBASE-0.38
├─build
│  ├─lib.win32-2.5
│  └─temp.win32-2.5
│      └─Release
├─doc
├─*.c, *.py, *.h
└─examples

Normally, simply running python setup.py install completes the installation of the python-sybase module. In practice things may be a little different: from the platform support table on its website, it appears that Open Client 15.0.x is currently not supported on the Windows platform.

Table 10-1: Platforms and ASE versions supported by python-sybase

Client OS             | Client Libraries           | Server OS          | Server Libraries
----------------------+----------------------------+--------------------+-------------------
Linux                 | Sybase ASE 15.0.1 (32bits) | Linux              | Sybase ASE 15.0.1
Linux 64-bits         | OpenClient 12.5            | Linux              | Sybase ASE 12.5
Linux                 | Sybase ASE 12.5 (32bits)   | Linux              | Sybase ASE 12.5
Linux                 | Sybase ASE 11.9.2          | Linux              | Sybase ASE 11.9.2
Linux                 | Sybase ASE 11.0.3          | Linux              | Sybase ASE 11.0.3
Linux                 | FreeTDS                    | Linux              | Sybase ASA 9.0.2
Mac OS X 10.4.x       | FreeTDS 0.62               | Linux              | Sybase ASE 12.5
Windows 2000 Prof SP1 | Sybase ASE 11.9.2          | Windows NT 4.0     | Sybase ASE 11.9.2
Windows NT 4.0 SP6    | Sybase ASE 11.9.2          | HPUX 10.20         | Sybase ASE 11.9.2
Windows 98            | Sybase ASE 11.9.2          | NT 4.0 SP6         | OpenClient 11.5
Windows 95 OSR2       | Sybase ASE 11.9.2          | Solaris 2.6        | OpenClient 11.5
Solaris 10            | Sybase ASE 15.0.1 (32bits) | Solaris 10         | Sybase ASE 15.0.1
Solaris 10            | Sybase ASE 12.5 (32bits)   | Solaris 10         | Sybase ASE 12.5
Solaris 8             | Sybase ASE 15.0.1 (32bits) | Solaris 8          | Sybase ASE 15.0.1
Solaris 8             | Sybase ASE 12.5 (32bits)   | Solaris 8          | Sybase ASE 12.5
SunOS 5.9             | OpenClient 12.0            | SunOS 5.9          | Sybase ASE 12.5
Solaris 2.6           | Sybase ASE 11.5.1          | Solaris 8          | OpenClient 11.5
Solaris 5.6           | Sybase ASE 11.0.3          |                    |
AIX 5.3               | Sybase ASE 15.0.1 (32bits) | AIX 5.3            | Sybase ASE 15.0.1
AIX 5.2               | Sybase ASE 12.5 (32bits)   | AIX 5.2            | Sybase ASE 12.5.1
IRIX 6.5              | 11.5.1                     |                    |
HP-UX 11              | 11.5.1                     |                    |

To support Sybase ASE 15.0 and later, the relevant code in setup.py must be edited by hand.

In setup.py, locate the following code block:

elif os.name == 'nt':                   # win32

    # Not sure how the installation location is specified under NT
    if sybase is None:
        sybase = r'i:\sybase\sql11.5'
        if not os.access(sybase, os.F_OK):
            sys.stderr.write(
                'Please define the Sybase installation directory in '
                'the SYBASE environment variable.\n')
            sys.exit(1)
    syb_libs = ['libblk', 'libct', 'libcs']

Replace syb_libs = ['libblk', 'libct', 'libcs'] with syb_libs = ['libsybblk', 'libsybct', 'libsybcs']. The reason is that in ASE 15.0 and later these three dynamic libraries were renamed. I ran into this problem when installing the package under Sybase ASE 15.0; for versions below 15.0 the file can be left unmodified.

The installation command is: python setup.py install

After installation, the following files appear under the Python directory:

Lib\site-packages\Sybase.py

Lib\site-packages\python_sybase-0.38-py2.5.egg-info

Lib\site-packages\sybasect.pyd
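Once installed, the module is used through the standard PEP 249 interface. Below is a minimal sketch; the DSN, user, and password are placeholders, `select @@version` is just an example query, and the import is guarded so the sketch still reads where the module is absent:

```python
try:
    import Sybase                 # provided by the python-sybase package
except ImportError:
    Sybase = None                 # module not installed; sketch only

def fetch_ase_version(dsn, user, passwd):
    """Open a connection, run a query, and return the first column of
    the first row, following the usual DB-API 2.0 pattern."""
    db = Sybase.connect(dsn, user, passwd)
    try:
        cur = db.cursor()
        cur.execute('select @@version')
        return cur.fetchone()[0]
    finally:
        db.close()

# Example with placeholder credentials (requires a reachable ASE server):
# print(fetch_ase_version('SYBASE', 'sa', 'secret'))
```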