Closed by QuantLab 9 years ago
libtool --mode=execute vowpalwabbit/vw -h
libvw.0.dylib is a dynamically linked library, so it has to be somewhere the dynamic loader can find it. libtool --mode=execute does all of the necessary setup to execute vw without you having to munge the loader path by hand.
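For context, a rough sketch of what the wrapper arranges under the hood (the .libs path follows the usual libtool convention; the exact mechanism libtool uses may differ):

```shell
# Rough equivalent of what `libtool --mode=execute` does on macOS: put the
# uninstalled library directory on the dynamic loader's search path, then
# run the real binary that lives under .libs (per libtool convention).
export DYLD_LIBRARY_PATH="$PWD/vowpalwabbit/.libs${DYLD_LIBRARY_PATH:+:$DYLD_LIBRARY_PATH}"
if [ -x vowpalwabbit/.libs/vw ]; then
    vowpalwabbit/.libs/vw -h
fi
```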
When I type libtool --mode=execute vowpalwabbit/vw -h
I get:
error: libtool: unknown option character `-' in: --mode=execute
Usage: libtool -static [-] file [...] [-filelist listfile[,dirname]] [-arch_only arch] [-sacLT] [-no_warning_for_no_symbols]
Usage: libtool -dynamic [-] file [...] [-filelist listfile[,dirname]] [-arch_only arch] [-o output] [-install_name name] [-compatibility_version #] [-current_version #] [-seg1addr 0x#] [-segs_read_only_addr 0x#] [-segs_read_write_addr 0x#] [-seg_addr_table
The likely reason for this behavior is that Mac OS X ships a different libtool:
https://stackoverflow.com/questions/22677133/installation-of-libtool
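A quick way to check which libtool is being picked up (the Homebrew package name below is an assumption; Homebrew installs GNU libtool with a g prefix precisely so it doesn't shadow Apple's tool):

```shell
# Apple's /usr/bin/libtool is a static-library archiver and does not
# understand GNU libtool's --mode flags. Homebrew installs GNU libtool
# as `glibtool` so it does not clash with the Apple tool.
if command -v glibtool >/dev/null 2>&1; then
    echo "GNU libtool available as glibtool"
else
    echo "no glibtool on PATH; try 'brew install libtool'"
fi
```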
2. So next I tried glibtool --mode=execute vowpalwabbit/vw -h, which returns the original issue:
dyld: Library not loaded: /usr/local/lib/libvw.0.dylib
  Referenced from: /Users/Playtaw1n/SourceSoft/vowpal_wabbit/vowpalwabbit/.libs/vw
  Reason: image not found
Trace/BPT trap: 5
P.S. Could it be that /usr/local/lib/libvw.0.dylib wasn't installed in the first place? I can't find it in /usr/local/bin or anywhere else through Finder.
The /usr/local/lib/libvw.0.dylib location for the shared library is an artifact of the autotools build.
In the vowpal_wabbit directory, did you try:
./libtool --mode=execute vowpalwabbit/vw -h
Here's my output:
% ./libtool --mode=execute vowpalwabbit/vw -h
Num weight bits = 18
learning rate = 0.5
initial_t = 0
power_t = 0.5
using no cache
Reading datafile =
num sources = 1
VW options:
--random_seed arg seed random number generator
--ring_size arg size of example ring
Update options:
-l [ --learning_rate ] arg Set learning rate
--power_t arg t power value
--decay_learning_rate arg Set Decay factor for learning_rate
between passes
--initial_t arg initial t value
--feature_mask arg Use existing regressor to determine
which parameters may be updated. If no
initial_regressor given, also used for
initial weights.
Weight options:
-i [ --initial_regressor ] arg Initial regressor(s)
--initial_weight arg Set all weights to an initial value of
arg.
--random_weights arg make initial weights random
--input_feature_regularizer arg Per feature regularization input file
Parallelization options:
--span_server arg Location of server for setting up
spanning tree
--unique_id arg unique id used for cluster parallel
jobs
--total arg total number of nodes used in cluster
parallel job
--node arg node number in cluster parallel job
Diagnostic options:
--version Version information
-a [ --audit ] print weights of features
-P [ --progress ] arg Progress update frequency. int:
additive, float: multiplicative
--quiet Don't output diagnostics and progress
updates
-h [ --help ] Look here: http://hunch.net/~vw/ and
click on Tutorial.
Feature options:
--hash arg how to hash the features. Available
options: strings, all
--ignore arg ignore namespaces beginning with
character <arg>
--keep arg keep namespaces beginning with
character <arg>
--redefine arg redefine namespaces beginning with
characters of string S as namespace N.
<arg> shall be in form 'N:=S' where :=
is operator. Empty N or S are treated
as default namespace. Use ':' as a
wildcard in S.
-b [ --bit_precision ] arg number of bits in the feature table
--noconstant Don't add a constant feature
-C [ --constant ] arg Set initial value of constant
--ngram arg Generate N grams. To generate N grams
for a single namespace 'foo', arg
should be fN.
--skips arg Generate skips in N grams. This in
conjunction with the ngram tag can be
used to generate generalized
n-skip-k-gram. To generate n-skips for
a single namespace 'foo', arg should be
fN.
--feature_limit arg limit to N features. To apply to a
single namespace 'foo', arg should be
fN
--affix arg generate prefixes/suffixes of features;
argument '+2a,-3b,+1' means generate
2-char prefixes for namespace a, 3-char
suffixes for b and 1 char prefixes for
default namespace
--spelling arg compute spelling features for a given
namespace (use '_' for default
namespace)
--dictionary arg read a dictionary for additional
features (arg either 'x:file' or just
'file')
-q [ --quadratic ] arg Create and use quadratic features
--q: arg : corresponds to a wildcard for all
printable characters
--cubic arg Create and use cubic features
Example options:
-t [ --testonly ] Ignore label information and just test
--holdout_off no holdout data in multiple passes
--holdout_period arg holdout period for test only, default
10
--holdout_after arg holdout after n training examples,
default off (disables holdout_period)
--early_terminate arg Specify the number of passes tolerated
when holdout loss doesn't decrease
before early termination, default is 3
--passes arg Number of Training Passes
--initial_pass_length arg initial number of examples per pass
--examples arg number of examples to parse
--min_prediction arg Smallest prediction to output
--max_prediction arg Largest prediction to output
--sort_features turn this on to disregard order in
which features have been defined. This
will lead to smaller cache sizes
--loss_function arg (=squared) Specify the loss function to be used,
uses squared by default. Currently
available ones are squared, classic,
hinge, logistic and quantile.
--quantile_tau arg (=0.5) Parameter \tau associated with Quantile
loss. Defaults to 0.5
--l1 arg l_1 lambda
--l2 arg l_2 lambda
Output model:
-f [ --final_regressor ] arg Final regressor
--readable_model arg Output human-readable final regressor
with numeric features
--invert_hash arg Output human-readable final regressor
with feature names. Computationally
expensive.
--save_resume save extra state so learning can be
resumed later with new data
--save_per_pass Save the model after every pass over
data
--output_feature_regularizer_binary arg
Per feature regularization output file
--output_feature_regularizer_text arg Per feature regularization output file,
in text
Output options:
-p [ --predictions ] arg File to output predictions to
-r [ --raw_predictions ] arg File to output unnormalized predictions
to
Reduction options, use [option] --help for more info:
--bootstrap arg k-way bootstrap by online importance
resampling
--search arg Use learning to search,
argument=maximum action id or 0 for LDF
--cbify arg Convert multiclass on <k> classes into
a contextual bandit problem
--cb arg Use contextual bandit learning with <k>
costs
--csoaa_ldf arg Use one-against-all multiclass learning
with label dependent features. Specify
singleline or multiline.
--wap_ldf arg Use weighted all-pairs multiclass
learning with label dependent features.
Specify singleline or multiline.
--csoaa arg One-against-all multiclass with <k>
costs
--multilabel_oaa arg One-against-all multilabel with <k>
labels
--log_multi arg Use online tree for multiclass
--ect arg Error correcting tournament with <k>
labels
--oaa arg One-against-all multiclass with <k>
labels
--top arg top k recommendation
--binary report loss as binary classification on
-1,1
--link arg (=identity) Specify the link function: identity,
logistic or glf1
--stage_poly use stagewise polynomial feature
learning
--lrq arg use low rank quadratic features
--autolink arg create link function with polynomial d
--new_mf arg rank for reduction-based matrix
factorization
--nn arg Sigmoidal feedforward network with <k>
hidden units
--active enable active learning
--bfgs use bfgs optimization
--conjugate_gradient use conjugate gradient based
optimization
--lda arg Run lda with <int> topics
--noop do no learning
--print print examples
--rank arg rank for matrix factorization.
--sendto arg send examples to <host>
--svrg Streaming Stochastic Variance Reduced
Gradient
--ftrl Follow the Regularized Leader
--ksvm kernel svm
Gradient Descent options:
--sgd use regular stochastic gradient descent
update.
--adaptive use adaptive, individual learning
rates.
--invariant use safe/importance aware updates.
--normalized use per feature normalized updates
--sparse_l2 arg (=0) use per feature normalized updates
Input options:
-d [ --data ] arg Example Set
--daemon persistent daemon mode on port 26542
--port arg port to listen on; use 0 to pick unused
port
--num_children arg number of children for persistent
daemon mode
--pid_file arg Write pid file in persistent daemon
mode
--port_file arg Write port used in persistent daemon
mode
-c [ --cache ] Use a cache. The default is
<data>.cache
--cache_file arg The location(s) of cache_file.
-k [ --kill_cache ] do not reuse existing cache: create a
new one always
--compressed use gzip format whenever possible. If a
cache file is being created, this
option creates a compressed cache file.
A mixture of raw-text & compressed
inputs are supported with
autodetection.
--no_stdin do not default to reading from stdin
You want to use the libtool that is automagically generated for the project, i.e., the one in the vowpal_wabbit directory.
What's the output when you invoke the following:
$ file vowpalwabbit/vw
It should be a shell script. If not, there's something more interesting going on in your development environment.
I also tried vw's own libtool, ./libtool --mode=execute vowpalwabbit/vw -h, and the result was the same as with glibtool:
dyld: Library not loaded: /usr/local/lib/libvw.0.dylib
  Referenced from: /Users/Playtaw1n/SourceSoft/vowpal_wabbit/vowpalwabbit/.libs/vw
  Reason: image not found
Trace/BPT trap: 5
The output of file vowpalwabbit/vw is: vowpalwabbit/vw: POSIX shell script text executable. An inconsistency in this environment?
dyld: Library not loaded: /usr/local/lib/libvw.0.dylib
  Referenced from: /Users/Playtaw1n/SourceSoft/vowpal_wabbit/vowpalwabbit/.libs/vw
  Reason: image not found
Trace/BPT trap: 5
The error seems to mix the install location (/usr/local/lib) and the work/build location (vowpalwabbit/...).
I'm not familiar with the macOS dynamic loader (nor with libtool, for that matter), but this sounds as if the file under /usr/local/ is missing and should be installed, no? Maybe a chicken-and-egg issue?
On Linux you'd normally just run sudo ldconfig /some/new/library-directory once when a non-standard library directory gets added, and the dynamic loader becomes aware of the new location.
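macOS has no ldconfig equivalent: the expected library path is embedded in the binary at link time. A hedged sketch for inspecting it (otool ships with Apple's developer tools; the snippet skips itself where otool or the binary is unavailable):

```shell
# On macOS, `otool -L` (the rough counterpart of Linux's ldd) lists the
# library paths an executable expects, e.g. /usr/local/lib/libvw.0.dylib
# in this thread's error message.
if command -v otool >/dev/null 2>&1 && [ -x vowpalwabbit/.libs/vw ]; then
    otool -L vowpalwabbit/.libs/vw
else
    echo "otool or the vw binary not available here; skipping"
fi
```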
"...but this sounds like if the file under /usr/local/ is missing, it should be installed, no?" I think so too ;) but than there is a question - should it be there after vw installation? if so, why it is not there? By the way this is the link to my make process https://github.com/JohnLangford/vowpal_wabbit/issues/578
Not necessarily. The linker has to embed some path to the shared library so that the runtime linker (rtld) knows where to find it. By default, autotools uses the standard install prefix, /usr/local (unless you override it with the --prefix argument to configure).
But it sounds as if your development environment is TFU-ed in some way. Not sure offhand what it could be. It "just works" for me.
Hmm, that is strange, because my development environment is almost virgin ;)
But still, after a correct installation shouldn't libvw.0.dylib appear somewhere in the system?
Or is it possible for the installer to not fully build the software but pretend that everything is OK?
Have you tried to build using make only, from a pristine tree (i.e. no autogen.sh/automake etc.)?
OK, I've solved it. For people with little or no experience building from source (like me ;) ), you really should add one last step to the instructions: sudo make install
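For reference, the usual autotools sequence this fix corresponds to (step names reflect the repo's layout at the time and may differ between versions; guarded so it is a no-op outside a source tree):

```shell
# Full build-and-install sequence from a vowpal_wabbit checkout. The
# default --prefix is /usr/local, which is exactly where the runtime
# loader was looking for libvw.0.dylib above.
if [ -x ./autogen.sh ]; then
    ./autogen.sh          # regenerate configure when building from git
    ./configure           # accepts --prefix=... to install elsewhere
    make
    sudo make install     # installs vw and libvw.0.dylib under the prefix
fi
```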
Lol
P.S. Thanks for participating.
P.P.S. One more question: do I actually need the source folder now, or is it advisable to make clean after installation?
@Playtaw1n: You shouldn't have had to make install -- there's something else going on in your environment that is different from the rest of us.
do I now actually need the source folder? or is it advised to make clean after installation?
The two are not mutually exclusive.
Ultimately, it is your choice. The source tree has some goodies that aren't installed by default, e.g. some useful utilities under utl and some nice demos under demo. However, if you only need vw, you don't need to keep the source tree. As for make clean: it will release some disk space by removing the compiled binaries, but note that the .git repo itself, with all the history, is currently the biggest space consumer at ~129M, way bigger than the compiled binaries.
@Playtaw1n, @arielf: What I meant was that @Playtaw1n shouldn't have to make install in order to run vowpalwabbit/vw in-place. I have a fairly mature development environment on 10.8 (I can't upgrade to 10.9 because it hasn't been blessed by internal security), and I can run vowpalwabbit/vw in-place without make install. Hence my skepticism that make install is required.
@bscottm yes, I understood you, and I agree. I was just responding to @Playtaw1n other Qs :-)
@bscottm Given the installation instructions here I thought the same, but for some reason it didn't work for me. However, the INSTALL text file I discovered inside the cloned vw folder does say to make install, and while browsing the web for a solution I saw other people mention make install in their experience. Maybe it was a must for earlier versions; I don't know.
Hi! I got this when I tried to play around with it for the first time.