Discussion:
Reintroducing fish, the friendly interactive shell
l***@gmail.com
2006-03-06 17:49:29 UTC
Hi,

A little more than a year ago, I posted a message to this group about
fish, a new shell I'd written. (See
http://groups.google.se/group/comp.unix.shell/browse_frm/thread/d4ede40181eb5001/7ba776d029e871dd?q=fish&rnum=1#7ba776d029e871dd)

At the time, fish was what I would describe as a program with some nice
interactive features and a rather lousy language syntax. The discussion
that followed on this list focused mostly on the syntax, not on the new
interactive features, such as syntax highlighting, advanced tab
completions, etc. The new UI features had been the main focus of my
efforts so far, which is why the shell syntax was in the state it was.
In retrospect, I did not make it clear that while there were some
interesting new syntax ideas, the syntax itself should have been
considered a placeholder. It should also be noted that during the
course of the original conversation, it was demonstrated to me that zsh
already contained several of the features that I thought were specific
to fish; they were just not enabled by default, and the way to enable
them was not obvious to me. Fish still has plenty of unique UI
features, though.

Since then, I've spent a large amount of time hacking on fish, and many
others have helped out with patches, opinions and counter-arguments,
so the fish shell is very different from how it was one year ago. Most
of these changes are related to the fish syntax. When rewriting the
fish syntax, I had the choice of either using a Posix-like syntax,
using the somewhat less broken rc syntax, or doing something completely
new. Given the number of bike sheds in the world, and the well-known
NIH syndrome, it should not surprise anyone that I chose the last
option, though I have tried to keep the Posix syntax in all places
where I do not think it is completely broken. There is a small number
of changes that alter something that is not really broken, where I
simply thought the new syntax was a bit better. Perhaps these were
mistakes; I am unsure. The most noticeable such change is the use of
() instead of $() for command substitutions. Still, I hope that when
reading up on the fish syntax, people will agree with me that going
for a new syntax was, overall, the right thing to do. I am very
curious about what the people in this group think about fish the way
it works now. Here are the main syntax features of fish compared to
Posix-like shells such as bash:



No macro language pretence:

Two of the most common issues with Posix shells I've seen are caused by
the roots of shells as macro expansion languages, even though at least
bash is not internally implemented as one. The first is the
ever-present requirement of using quotes around variables to avoid
argument separation on spaces. This is just silly: it is almost never
what you want, and in the rare cases where it is, it can be achieved
with a command substitution and a pipe through tr. So fish never
separates arguments on spaces in variable values.
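To make the word-splitting complaint concrete, here is a minimal
demonstration (a sketch; it shells out to bash explicitly and assumes
a bash binary is installed):

```shell
# Unquoted expansion in bash splits the value into three arguments;
# quoting it keeps the value as a single argument.
unquoted=$(bash -c 'v="a b c"; set -- $v; echo $#')
quoted=$(bash -c 'v="a b c"; set -- "$v"; echo $#')
echo "$unquoted $quoted"   # 3 1
```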

The second such common example is when doing something like:

bash> FOO=BAR
bash> FOO=BAZ; echo $FOO
BAR

This makes a lot of sense if you consider the shell as a macro
expansion language that works on a line-by-line basis, but that does
not make it a good syntax.

The fact that bash has fixed the second of these two misfeatures, but
not the first, just makes things worse in my opinion, since it makes it
clear that bash is not a macro expansion language; it is simply a
language that pretends to be one. In my opinion, this makes the
language unpredictable, since you never know, without testing, whether
bash will behave like a macro expander or not.

If I'm not mistaken (I often am when it comes to zsh; I apologize in
advance for all the misrepresentations of zsh that I'm sure exist in
this post), zsh allows you to choose how it behaves in both these
cases. This is a recurring theme with zsh, at least it seems that way
to me. The philosophy of always answering the question 'Which of these
two paths should we choose?' with 'Both!' is suboptimal as I see it. I
don't think it is a good idea to make the syntax of programming
languages configurable. If you do, you can't safely run a piece of
shellscript without checking that the scripter used the same settings
as you do. Also, the zsh approach means that everyone has to take the
time to tweak every little detail of the shell to their liking (since
the defaults are rarely very good), and that if they are stuck on a
foreign host, they will be lost since everything will be tweaked all
wrong. In my opinion, it is often better to choose one way of doing
something and stick with it. If it turns out to be the wrong way (a
conclusion that should _never_ be reached without a great deal of
thought), implement a new one and eventually _remove_ the old, bad way
of doing it.



Scoping:

Posix shells have no notion of local variables, and instead use
subshells to get something more or less like read-only variables in
some situations. Unless I'm mistaken, bash allows you to create local
variables as an extension, but the main way of providing variable
scope is still to use subshells, either through (), through command
substitution, or simply by using a pipeline. It should also be noted
that Posix does not define the behaviour of pipelines with regard to
the forking of subshells, and that bash and zsh do this differently.
Specifically, the following works in zsh but is meaningless in bash:

cat file.txt|read contents
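The difference can be observed directly (a sketch; the explicit bash
invocation is there so the behavior shown is bash's, whatever shell
runs the snippet):

```shell
# In bash, every element of a pipeline runs in a subshell, so the
# variable set by 'read' is lost when the pipeline ends.
out=$(bash -c 'echo hello | read contents; echo "[$contents]"')
echo "$out"   # prints "[]" in bash; zsh would print "[hello]"
```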

In fish, all variables have scope. By default, when creating a new
variable the scope is local to the currently running function, or, if
no function is running, global. You can explicitly set the scope of a
variable using -g (global) or -l (local to the current block of
commands). There are _no_ subshells in fish, i.e. fish never forks off
a subprocess other than to execute a command in it. This means that
you can freely change variable values inside command substitutions and
pipes, and these changes will be visible in the shell proper.

I think that while some people may be used to them, the traditional
shell scoping rules are both error prone (large shellscripts often
have problems with variable name clashes, because most variables end
up as globals) and limiting (you often cannot rely on variable writes
being permanent; if you do, you limit your function to never being
used in pipelines or command substitutions). The type of scoping
provided in fish, like that of nearly every other procedural or
object-oriented language on the planet, is often both safer and more
powerful. As an extra bonus, many people use other languages with
scoping rules similar to fish's, meaning that they will already be
familiar with them.



Block commands:

Fish changes the way blocks of code are defined. Examples:

if foo; then bar; fi  ->  if foo; bar; end
case $foo in a) bar;; *) baz;; esac  ->  switch $foo; case a; bar; case '*'; baz; end
foo(){ bar; }  ->  function foo; bar; end

You will notice that 'fi', 'esac', 'done', '}' and all the other
block-ending commands have been replaced by 'end'. The updated block
syntax is inspired by Matlab and Lua. The biggest benefit, in my
opinion, is with function definitions. Quickly now, which of the
following are legal function definitions?

hello () {echo hello }
hello () {echo hello;}
hello () {;echo hello }
hello () {;echo hello;}
hello () { echo hello }
hello () { echo hello;}
hello () { ;echo hello }
hello () { ;echo hello;}

The answer is - only number six. At least in my book, that is proof of
a completely broken syntax.
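The claim is easy to check empirically for any of the eight. A sketch
that feeds two of them to bash from a plain POSIX shell (the helper
name `check` is made up):

```shell
# Append a call after each candidate definition; if the whole string
# parses and runs, the definition was legal.
check() { bash -c "$1; hello" >/dev/null 2>&1 && echo legal || echo illegal; }
five=$(check 'hello () { echo hello }')    # no ';' before the '}'
six=$(check 'hello () { echo hello;}')
echo "$five $six"   # illegal legal
```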

Fish also drops words like 'then' and 'do', which make the code look
less like code and more like English at the expense of brevity and
consistency. I have never found these extra keywords to be more
helpful than 'please' is in Intercal.



Everything is a command:

Here are a few more fish syntax changes:

foo=bar -> set foo bar
make && make install -> make; and make install

These changes exist to make the fish syntax more consistent.
Specifically, in fish pretty much everything is a command. Variable
assignments, loops and conditionals are all regular builtin commands.
I firmly believe that by making as much of the syntax as possible obey
the same rules, the language becomes more predictable and easier to
learn. It could be argued that this is trying to fit a round peg into
a square hole, but I don't think that is the case; I've found that all
these tasks are very suitable for builtin commands. Also, it is much
easier to get help on a command than on some weird piece of syntactic
sugar. Want to know how to use the 'set' builtin? Type 'set --help'
and you'll get it.



Nicer array variables:

In fish, all variables are really arrays. To define a variable 'foo'
with the elements 'a', 'b' and 'c', simply use

set foo a b c

$foo is expanded to all the elements of the array as separate elements.
Use [] to access an element of the array, e.g. $foo[1]. You can
specify multiple elements, e.g. $foo[1 3], and you can use the seq
command inside a command substitution to slice the array, e.g.
$foo[(seq 2)]. You can also set and erase parts of the array. For
example, the following loops over the $argv array until it is empty:

while count $argv >/dev/null
    switch $argv[1]
        ...
    end
    set -e argv[1]
end

Compare this with bash, where the syntax used to reference and assign
arrays is non-obviously (to me, at least) different from the regular
variable syntax, and it is hardly surprising that very few people seem
to use arrays in bash.
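For reference, here is the kind of inconsistency meant, shown from a
POSIX shell that shells out to bash (a sketch; the array contents are
made up):

```shell
# Scalar-looking syntax silently yields only the first element;
# the whole array and its length need the ${...[@]} forms.
first=$(bash -c 'arr=(a b c); echo $arr')
all=$(bash -c 'arr=(a b c); echo "${arr[@]}"')
count=$(bash -c 'arr=(a b c); echo "${#arr[@]}"')
echo "$first / $all / $count"   # a / a b c / 3
```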

An extra bonus in fish is that all variables inherited from the parent
process are turned into arrays using ':' as the array separator. That
means that PATH, CDPATH, LS_COLORS and various other lists are treated
as arrays by fish. When exported to subcommands, all arrays are of
course concatenated together again using ':' as the join character.
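For comparison, the colon-joined form that fish imports and re-exports
is, from a POSIX shell, one opaque string you must split by hand (a
minimal sketch; the directory list is made up):

```shell
# What fish does on import (split on ':') and export (join with ':'),
# done manually in POSIX sh.
joined="/usr/bin:/bin:/usr/local/bin"
elements=$(printf '%s\n' "$joined" | tr ':' '\n' | grep -c .)
first_element=${joined%%:*}
echo "$elements $first_element"   # 3 /usr/bin
```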



Universal variables:

Since the dawn of time, clueless users have asked questions like 'how
can I change an environment variable in another running process?' The
answer has always been some variation of 'You don't.' No longer so in
fish. Fish supports universal variables, which are variables whose
value is shared between all running fish instances of the specified
user on the specified machine. Universal variables are automatically
saved across reboots and shutdowns, so you don't need to put their
values in an init file. Universal variables have the outermost scope,
meaning they will never be used in preference to shell-specific
variables, which should minimize the security implications.

Universal variables make it much more practical to use environment
variables for configuration options. You simply change a variable in
one shell, the change propagates to all running shells, and it is
saved so that the new value is used after a reboot as well. One
example of universal variables in action can be had by launching two
fish instances in separate terminals side by side. Then issue the
command 'set fish_color_cwd blue', and the current working directory
element of the prompt will change to blue in both shells. Using
universal variables makes it much more convenient to set configuration
options like $BROWSER, $PAGER and $CDPATH.



Events:

Fish allows you to trigger a shellscript function at a specific time,
such as at the completion of a specific job or process, when a
specific variable changes value, or when a specific signal is
received. The syntax for this is:

function winch_handler --on-signal WINCH; echo WINCH; end
function browser_handler --on-variable BROWSER; echo new browser is $BROWSER; end
function process_exit_handler --on-process 123; echo process 123 died; end

For example, bash process substitution is not natively supported by
fish, but a workalike, implemented in shellscript, is included with
fish:

function psub
    # By setting the variables here, we set their scope to function
    # local. This means they won't overwrite any existing global
    # variables with the same name.
    set -l filename
    set -l funcname

    # Find a unique file name for writing output to
    while true
        set filename /tmp/.psub.(echo %self).(random)
        if not test -e $filename
            break
        end
    end

    mkfifo $filename
    cat >$filename &
    echo $filename

    # Find a unique function name
    while true
        set funcname __fish_psub_(random)
        if not functions $funcname >/dev/null ^/dev/null
            break
        end
    end

    # Make sure we remove the fifo when the caller exits
    eval "function $funcname --on-job-exit caller; rm $filename; functions --erase $funcname; end"
end

The above allows you to replace this

diff <(sort foo.txt) <(sort bar.txt)

with

diff (sort foo.txt|psub) (sort bar.txt|psub)

This is very slightly less efficient, since the output is filtered
through the cat command, but it never touches disk since a fifo is
used, and all commands can run concurrently, so the efficiency is
still more than acceptable.
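The bash construct that psub emulates can be sketched as follows (the
temporary files and their contents are made up; bash is invoked
explicitly because <() is a bash feature):

```shell
# Two files with the same lines in different order; after sorting,
# diff via process substitution sees no difference.
a=$(mktemp); b=$(mktemp)
printf 'b\na\n' > "$a"
printf 'a\nb\n' > "$b"
if bash -c "diff <(sort $a) <(sort $b)" > /dev/null; then
    same=yes
else
    same=no
fi
rm -f "$a" "$b"
echo "$same"   # yes
```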

A workalike of the Posix 'trap' builtin is also implemented as a
shellscript wrapper around event handlers. Being able to trigger
function calls on diverse types of events is, in my opinion, a rather
powerful language feature, and it is one I hope to extend with new
trigger types in the future.



Error reporting:

Fish tries to help the user by being verbose and specific in its error
messages. Here are a few examples:

Trying to use Posix-style variable assignment tells you how to use
fish-style variable assignment:

fish> foo=bar
fish: Unknown command 'foo=bar'. Did you mean 'set VARIABLE VALUE'?
For information on setting variable values, see the help section on
the set command by typing 'help set'.

Using Posix short-circuit operators, Posix-style command substitution
and various other features that use a different syntax in fish will
also give such error messages, with pointers to how to do the same
thing in fish.

Fish also provides you with stack traces on errors:

fish> . foo.fish
fish: Unknown command 'abc123'
/home/axel/code/c/fish_current/foo.fish (line 5): abc123
^
in function 'do_something',
called on line 7 of file
'/home/axel/code/c/fish_current/foo.fish',

in . (source) call of file '/home/axel/code/c/fish_current/foo.fish',
called on standard input,



Autoloaded functions:

Fish functions and command-specific completions can be placed in
special directories, specified by the variables $fish_function_path
and $fish_complete_path. Such files are autoloaded on the first
invocation of completions for the command, or on the first invocation
of the function. The modification time of each such file is also
tracked, so that if the file is changed, or a new one is added in a
directory with higher priority in the path, the relevant file is
reloaded. This allows one to write a huge amount of code in
shellscript without increasing either memory usage or startup time
and, more importantly, without forcing the user to turn on specific
features manually. Features that aren't used will not take up memory
or processing power. Fish ships with many thousands of lines of
shellscript code, but only a few hundred lines are run on startup.



Dropped features:

Fish tries to consolidate multiple strongly related Posix features
into one. One such consolidation is dropping dollar-quotes like $'\n'
in favor of allowing backslash escapes in regular strings, e.g. \n and
\x20 both work as you would expect. Other dropped features include
subshells (use fish scoping or a block of commands), math mode (use
the bc or expr commands), here documents (use quotes or, to paste
large amounts of data possibly containing quotes and other problematic
characters into the shell, use ^Y to paste from the X clipboard) and
process substitution (use the psub workalike described above).



Failures:

While implementing new syntax features for fish, a great many
different ideas were tried out. Most of them turned out to be bad
ones, and they were dropped. I have often heard the argument against
changing the Posix syntax that the original designers knew what they
were doing, and that changing things will only make them worse. This
is true in the sense that if one simply tried out every great new idea
one got, and then made it stick around forever even when it turned out
to be a bad one, things would degenerate quickly. But one _can_, to
some extent, separate the wheat from the chaff. I have presented some
of what I concluded to be wheat above. For those who are interested,
here follows some of the chaff:

Originally, when fish encountered a wildcard that had no matches, it
would silently remove that argument. This makes a huge amount of sense
sometimes, e.g. when doing things like 'for i in *.txt', but does not
do what you would like when using e.g. 'ls *.txt'. I do not like the
bash method of leaving such arguments unexpanded, since it relies on
the called commands to detect the wildcard and try to report a
meaningful error, which can lead to very confusing behaviour. Instead,
I've opted to follow the path chosen by csh: if all wildcards given to
a specific command fail to match, the command is not executed. In
interactive mode, a warning is printed as well. This is just one of
the many things I feel csh got right. Too bad that the number of
things csh got horribly wrong is even greater...
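All three behaviours can be observed in bash, which makes the
trade-off easy to inspect (a sketch; the directory name is made up
precisely so the glob cannot match):

```shell
# Default bash: the pattern is passed through literally.
kept=$(bash -c 'echo /no_such_dir_12345/*.txt')
# nullglob: the argument is silently removed (fish's original choice).
removed=$(bash -c 'shopt -s nullglob; echo /no_such_dir_12345/*.txt')
# failglob: the command is not run at all (the csh-like choice fish adopted).
if bash -c 'shopt -s failglob; echo /no_such_dir_12345/*.txt' 2>/dev/null
then ran=yes; else ran=no; fi
echo "$kept|$removed|$ran"
```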

Originally, fish made no difference between single quotes and double
quotes; both turned off all types of argument expansion. I argued that
to embed variables in a quoted string, one could simply use printf. I
also argued that mixing the two mixes code with text, which is bad,
and that it is confusing to have two types of quotes which are similar
but not identical. In the end, I allowed variable expansion inside
double quotes, because it saves keystrokes, because it allows you to
make sure array variables get expanded into exactly one token no
matter how many elements the array contains, and because it allows me
to spend much more time hacking instead of answering the same question
over and over again.
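For comparison, the 'exactly one token' behaviour that fish gets from
plain double quotes corresponds to a distinction bash forces you to
make explicitly with [*] versus [@] (a sketch; shells out to bash):

```shell
# "${arr[*]}" joins the array into one word; "${arr[@]}" keeps the
# elements as separate words.
as_one=$(bash -c 'arr=(a b c); set -- "${arr[*]}"; echo $#')
as_many=$(bash -c 'arr=(a b c); set -- "${arr[@]}"; echo $#')
echo "$as_one $as_many"   # 1 3
```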

Originally, fish did not have any blocks at all. Instead, code was
given as arguments to other commands; e.g. the last argument to a for
loop was the list of commands to run in the loop. This is easy to
implement, but completely useless for longer scripts.



Closing comments:

There are a great many features in fish that have little to do with
syntax, like syntax highlighting, advanced tab completion, X clipboard
integration, etc. But this post is only meant to discuss the design
and implications of the changes made to regular shell syntax in fish.
Specifically, I'd be interested in opinions on security
considerations, regressions and further possible changes to the
syntax. To try out fish, visit http://roo.no-ip.org/fish/ or use the
prepackaged version available for many systems, including Debian. Fish
is GPLed, and it works on most Linux versions, NetBSD, FreeBSD, OS X,
Solaris and possibly Cygwin.
--
Axel
Jordan Abel
2006-03-06 20:57:17 UTC
Post by l***@gmail.com
bash> FOO=BAR
bash> FOO=BAZ; echo $FOO
BAR
On what shell does this happen? Even on the original unix v7 bourne
shell, this isn't true. The unix v6 shell didn't support variables at
all.
l***@gmail.com
2006-03-06 22:19:20 UTC
Post by Jordan Abel
Post by l***@gmail.com
bash> FOO=BAR
bash> FOO=BAZ; echo $FOO
BAR
On what shell does this happen? Even on the original unix v7 bourne
shell, this isn't true. The unix v6 shell didn't support variables at
all.
You are of course completely right, I got confused. I meant:

bash> FOO=BAR
bash> FOO=BAZ echo $FOO
BAR

Which is something different. And I notice that bash does indeed _not_
stray from the traditional output of BAR in the above situation. My
bad.

My point about macro expanding languages having unintuitive behaviour
still stands, though.
--
Axel
Jordan Abel
2006-03-07 00:08:03 UTC
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
bash> FOO=BAR
bash> FOO=BAZ; echo $FOO
BAR
On what shell does this happen? Even on the original unix v7 bourne
shell, this isn't true. The unix v6 shell didn't support variables at
all.
bash> FOO=BAR
bash> FOO=BAZ echo $FOO
BAR
Which is something different. And I notice that bash does indeed _not_
stray from the traditional output of BAR in the above situation. My
bad.
That's because you misunderstand what the VAR=value cmd syntax is for -
it doesn't set a shell variable at all - it puts a variable in the
command's environment. The real problem is in the half-assed equivalence
between shell variables and environment variables.

try

FOO=BAR
FOO=BAZ env | grep FOO
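That suggestion, spelled out as a runnable sketch (POSIX sh; variable
names taken from the thread):

```shell
# VAR=value cmd puts the value only into cmd's environment;
# the shell's own variable is untouched.
FOO=BAR
in_child=$(FOO=BAZ sh -c 'echo "$FOO"')
after=$FOO
echo "$in_child $after"   # BAZ BAR
```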
Post by l***@gmail.com
My point about macro expanding languages having unintuitive behaviour
still stands, though.
It's only unintuitive when you misread the intent of a construct.
l***@gmail.com
2006-03-07 01:26:30 UTC
Post by Jordan Abel
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
bash> FOO=BAR
bash> FOO=BAZ; echo $FOO
BAR
On what shell does this happen? Even on the original unix v7 bourne
shell, this isn't true. The unix v6 shell didn't support variables at
all.
bash> FOO=BAR
bash> FOO=BAZ echo $FOO
BAR
Which is something different. And I notice that bash does indeed _not_
stray from the traditional output of BAR in the above situation. My
bad.
That's because you misunderstand what the VAR=value cmd syntax is for -
it doesn't set a shell variable at all - it puts a variable in the
command's environment. The real problem is in the half-assed equivalence
between shell variables and environment variables.
try
FOO=BAR
FOO=BAZ env | grep FOO
I understand why that happens. I got a bit confused before since I
don't use Posix shells enough these days, but I know _why_ the above
code does what it does. I just don't think it's a good syntax.
Post by Jordan Abel
Post by l***@gmail.com
My point about macro expanding languages having unintuitive behaviour
still stands, though.
It's only unintuitive when you misread the intent of a construct.
You yourself stated, 'The real problem is in the half-assed equivalence
between shell variables and environment variables'. My point isn't that
the Posix syntax here is somehow wrong, given the abstraction of a
macro language, it makes perfect sense. But that does not make it a
_good_ syntax. Macro languages simply aren't very useful when compared
to regular procedural languages, in my experience.

As an extreme example of the same phenomenon, consider the Intercal
notion of reversing the order of the bits in a byte on output. It
makes perfect sense once you accept that Intercal uses the abstraction
of looking at IO as passing messages on notes and throwing them
backwards. If you turn your head and look at the things you've
written, they will be in reverse order, because you've turned your
head 180 degrees. But from the viewpoint of a programmer actually
wanting to write code, this is a counterproductive abstraction, which
makes Intercal a horrible language to use.
--
Axel
Jordan Abel
2006-03-07 05:31:49 UTC
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
bash> FOO=BAR
bash> FOO=BAZ; echo $FOO
BAR
On what shell does this happen? Even on the original unix v7 bourne
shell, this isn't true. The unix v6 shell didn't support variables at
all.
bash> FOO=BAR
bash> FOO=BAZ echo $FOO
BAR
Which is something different. And I notice that bash does indeed _not_
stray from the traditional output of BAR in the above situation. My
bad.
That's because you misunderstand what the VAR=value cmd syntax is for -
it doesn't set a shell variable at all - it puts a variable in the
command's environment. The real problem is in the half-assed equivalence
between shell variables and environment variables.
try
FOO=BAR
FOO=BAZ env | grep FOO
I understand why that happens. I got a bit confused before since I
don't use Posix shells enough these days, but I know _why_ the above
code does what it does. I just don't think it's a good syntax.
Post by Jordan Abel
Post by l***@gmail.com
My point about macro expanding languages having unintuitive behaviour
still stands, though.
It's only unintuitive when you misread the intent of a construct.
You yourself stated, 'The real problem is in the half-assed equivalence
between shell variables and environment variables'. My point isn't that
the Posix syntax here is somehow wrong, given the abstraction of a
macro language, it makes perfect sense. But that does not make it a
_good_ syntax. Macro languages simply aren't very useful when compared
to regular procedural languages, in my experience.
They're good when the main purpose is to put together a bunch of
routines whose argument set is a list of strings.
Stephane CHAZELAS
2006-03-07 08:30:40 UTC
2006-03-6, 17:26(-08), ***@gmail.com:
[...]
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
bash> FOO=BAR
bash> FOO=BAZ echo $FOO
BAR
Which is something different. And I notice that bash does indeed _not_
stray from the traditional output of BAR in the above situation. My
bad.
That's because you misunderstand what the VAR=value cmd syntax is for -
it doesn't set a shell variable at all - it puts a variable in the
command's environment. The real problem is in the half-assed equivalence
between shell variables and environment variables.
try
FOO=BAR
FOO=BAZ env | grep FOO
I understand why that happens. I got a bit confused before since I
don't use Posix shells enough these days, but I know _why_ the above
code does what it does. I just don't think it's a good syntax.
It's a direct mapping to the execve system call. It couldn't be
more intuitive if you think of it that way. And had it output
BAR above, I would have found it counter-intuitive.

[...]
Post by l***@gmail.com
You yourself stated, 'The real problem is in the half-assed equivalence
between shell variables and environment variables'. My point isn't that
the Posix syntax here is somehow wrong, given the abstraction of a
macro language, it makes perfect sense. But that does not make it a
_good_ syntax. Macro languages simply aren't very useful when compared
to regular procedural languages, in my experience.
I don't think shells ever claimed to be macro languages. How do
you define a macro language yourself?

You would have had a point if you had chosen as an example:

alias a='echo a'
alias a='echo b'; b
--
Stéphane
l***@gmail.com
2006-03-07 13:58:29 UTC
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
bash> FOO=BAR
bash> FOO=BAZ echo $FOO
BAR
Which is something different. And I notice that bash does indeed _not_
stray from the traditional output of BAR in the above situation. My
bad.
That's because you misunderstand what the VAR=value cmd syntax is for -
it doesn't set a shell variable at all - it puts a variable in the
command's environment. The real problem is in the half-assed equivalence
between shell variables and environment variables.
try
FOO=BAR
FOO=BAZ env | grep FOO
I understand why that happens. I got a bit confused before since I
don't use Posix shells enough these days, but I know _why_ the above
code does what it does. I just don't think it's a good syntax.
It's a direct mapping to the execve system call. It couldn't be
more intuitive if you think of it that way. And had it output
BAR above, I would have found it counter-intuitive.
The problem I have with this is that a direct mapping to a low-level
syscall is not a suitable abstraction to use as the main means of
invoking commands in a high-level scripting language. Instead of
trying to provide the user with a thin execve wrapper, it might be
better (in my opinion) to focus on how to design a good _language_: a
language where variable scope makes sense from a language point of
view and not from an implementor's point of view. Specifically, from a
language point of view, the command

FOO=BAR echo $FOO

is intuitively viewed as being evaluated in the scope of that single
command. The parsing of the command, with substitution of variables
like $FOO, is intuitively done in the scope of that command. In
reality, the parsing is done by the shell, and only after it is done
is a child forked off; but that's just implementation.
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
You yourself stated, 'The real problem is in the half-assed equivalence
between shell variables and environment variables'. My point isn't that
the Posix syntax here is somehow wrong, given the abstraction of a
macro language, it makes perfect sense. But that does not make it a
_good_ syntax. Macro languages simply aren't very useful when compared
to regular procedural languages, in my experience.
I don't think shells ever claimed to be macro languages. How do
you define a macro language yourself.
Well, that is a bit of anthropomorphization on my part; the shells
don't really talk to me. But according to Wikipedia, "A macro language
is a programming language in which all or most computation is done by
expanding macros", which is very much true for shells. Variable
substitution, wildcard substitution, alias substitution and many other
language features work through macro expansion or an emulation of it.
Post by Stephane CHAZELAS
alias a='echo a'
alias a='echo b'; b
Typo. I think you meant

alias a='echo a'
alias a='echo b'; a

which will output 'a', and not 'b'. But you are completely right. That
is a _much_ better example of the macro-like properties of shells, and
one that I really feel should be removed. I notice that this one is
not fixed in zsh either, at least not by default. It is also one which
is removed in fish, where writing

function a; echo a; end; a
function a; echo b; end; a

will output 'a', not 'b'.

Fish only has functions, no aliases. One could of course add aliases by
doing something like:

function alias --description "A wrapper for providing function definitions using a weaker, but simpler syntax"
    set -l name $argv[1]
    set -l body $argv[2]
    eval "function $name; $body \$argv; end"
end

alias ll "ls -l"

but that is beside the point.

I chose bad examples in my original post, I obviously haven't been
using enough shellscripts recently to clearly remember the things I
dislike the most. Thank you for the correction.
Post by Stephane CHAZELAS
--
Stéphane
--
Axel
Stephane Chazelas
2006-03-07 14:26:16 UTC
On 7 Mar 2006 05:58:29 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Stephane CHAZELAS
alias a='echo a'
alias a='echo b'; b
Typo. I think you meant
alias a='echo a'
alias a='echo b'; a
which will output 'a', and not 'b'. But you are completely right. That
is a _much_ better example of the macro-like properties of shells, and
one that I really feel should be removed. I noticed that this is one is
not fixed in zsh either, at least no by default. It is also one which
is removed in fish, where writing
function a; echo a; end; a
function a; echo b; end; a
Those are not aliases. Aliases are meant to be aliases. It's
important that they are expanded very early in the parsing process;
they are meant to modify the input. If you don't want that, use
functions.

f() { echo a;}
f() { echo b;}; f

will output b

You can do things as silly as this in POSIX shells:

alias a='echo $(( 1'

a + 1))

it's a feature.
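The parse-time nature of alias expansion can be demonstrated from any
POSIX shell (a sketch; it needs a bash binary, and expand_aliases
because the inner shell is non-interactive):

```shell
# A use of 'a' on the same line as its redefinition still expands to
# the OLD definition, because the whole line was parsed first.
out=$(bash -c '
shopt -s expand_aliases
alias a="echo a"
alias a="echo b"; a
')
echo "$out"   # a
```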

aliases are aliases, not functions.

zsh even has global aliases (every token can be replaced)

alias -g ...=../..

cd ...

[...]
Post by l***@gmail.com
function alias --description "A wrapper for providing function
definitions using a weaker, but simpler syntax"
set -l name $argv[1]
set -l body $argv[2]
eval "function $name; $body \$argv; end"
end
that is not the same.
Post by l***@gmail.com
alias ll "ls -l"
alias ls 'ls -F'

will probably create an infinite loop.
--
Stephane
l***@gmail.com
2006-03-07 14:54:45 UTC
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane CHAZELAS
alias a='echo a'
alias a='echo b'; b
Typo. I think you meant
alias a='echo a'
alias a='echo b'; a
which will output 'a', and not 'b'. But you are completely right. That
is a _much_ better example of the macro-like properties of shells, and
one that I really feel should be removed. I noticed that this one is
not fixed in zsh either, at least not by default. It is also one which
is removed in fish, where writing
function a; echo a; end; a
function a; echo b; end; a
Those are not aliases. aliases are meant to be aliases. It's
important that they are expanded very early in the parsing
process. It is meant to modify the input. If you don't want that
use functions.
f() { echo a;}
f() { echo b;}; f
will output b
alias a='echo $(( 1'
a + 1))
it's a feature.
aliases are aliases, not functions.
Absolutely. Aliases are a form of macro expansion, something that, as
you show above, can be used to do many rather strange things. I
consider this a _bad_ thing in that the number of sane uses for this is
small in comparison to the number of typos and minor misunderstandings
that lead to subtle but painful bugs. I don't mean to make the tired
old argument that dangerous features are bad; what I mean to say is
that the usefulness of dangerous features has to be weighed against
the potential to introduce subtle, evil bugs. Your original example is
a perfect example of a situation where the macro expanding properties
of aliases will cause such havoc.

I simply think that a real function syntax is much more suited for
programming than simple alias substitution.
Post by Stephane Chazelas
zsh even has global aliases (every token can be replaced)
alias -g ...=../..
cd ...
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
function alias --description "A wrapper for providing function
definitions using a weaker, but simpler syntax"
set -l name $argv[1]
set -l body $argv[2]
eval "function $name; $body \$argv; end"
end
that is not the same.
Post by l***@gmail.com
alias ll "ls -l"
alias ls 'ls -F'
will probably create an infinite loop.
Actually it won't, since fish explicitly checks for unconditional
recursion. These checks can of course be fooled, but your example above
will call the ls command, not the ls function.
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane Chazelas
2006-03-07 15:07:58 UTC
On 7 Mar 2006 06:54:45 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
I simply think that a real function syntax is much more suited for
programming than simple alias substitution.
Yes, that's why you use functions for programming and aliases
for aliasing.

There are some times where aliases are useful for programming.
But some shells disable them.

Like:

alias die='{ echo >&2 ERROR; return 1; }'

that can't be implemented as a function.
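A sketch of why not: a function version of die can print the error, but its 'return' only leaves die itself, so the caller keeps running (the names die, f, and out are illustrative):

```shell
# 'return' inside die returns from die, not from the function that
# called it, so the caller's next statement still executes.
die() { echo ERROR >&2; return 1; }
f() {
    die                 # prints ERROR, but does not terminate f
    echo "f continues"
}
out=$(f 2>/dev/null)
echo "$out"
```

With the alias, the '{ ...; return 1; }' text is pasted into the caller's own body, so the 'return' leaves the caller.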
Post by l***@gmail.com
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Same for zsh, then you can enter "..." which is 2 characters
shorter ;)
--
Stephane
l***@gmail.com
2006-03-07 18:20:10 UTC
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
I simply think that a real function syntax is much more suited for
programming than simple alias substitution.
Yes, that's why you use functions for programming and aliases
for aliasing.
There are some times where aliases are useful for programming.
But some shells disable them.
alias die='{ echo >&2 ERROR; return 1; }'
that can't be implemented as a function.
Which is a _good_ thing, in my opinion. This is like the difference
between Basic and C; in Basic you can jump to any line, but in C you
can only jump to specific code entry points, called functions. This is
the same thing, only with code exit points instead. If you hide the
code exit point like that, it's impossible to know by simply scanning
the code for 'exit', 'return' and 'break' where the exit points are,
meaning it's much harder to see the code flow.

This is a tradeoff between supporting 'cool hacks' and supporting
maintainable coding practices, and to me the latter is more important.
Post by Stephane Chazelas
Post by l***@gmail.com
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Same for zsh, then you can enter "..." which is 2 characters
shorter ;)
Just tried it on my computer, using zsh 4.2.1, and this doesn't work.
Writing '..' in the commandline doesn't transport me to the parent
directory.
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane Chazelas
2006-03-07 18:33:07 UTC
On 7 Mar 2006 10:20:10 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
alias die='{ echo >&2 ERROR; return 1; }'
that can't be implemented as a function.
Which is a _good_ thing, in my opinion. This is like the difference
between Basic and C; in Basic you can jump to any line, but in C you
can only jump to specific code entry points, called functions. This is
the same thing, only with code exit points instead. If you hide the
code exit point like that, it's impossible to know by simply scanning
the code for 'exit', 'return' and 'break' where the exit points are,
meaning it's much harder to see the code flow.
This is a tradeoff between supporting 'cool hacks' and supporting
maintainable coding practices, and to me the latter is more important.
Well, it depends. A shell is mostly a tool for interactive use.
You don't want to bridle the user there by adding constraints
that he doesn't care about. At the prompt, I don't care if what
I type is maintainable or nice code. That's one reason I think
it difficult to have a good shell that is also a good
programming language.

BTW, if we take that comparison further with C, aliases are like
preprocessor macros.
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Same for zsh, then you can enter "..." which is 2 characters
shorter ;)
Just tried it on my computer, using zsh 4.2.1, and this doesn't work.
Writing '..' in the commandline doesn't transport me to the parent
directory.
[...]

Sorry, you need:

setopt autocd

I never use that feature, I should disable that option. I prefer
to use cd so that the completion only completes directories (or
~user or +<dirstack-number>)
--
Stephane
Kurt Swanson
2006-03-07 18:40:36 UTC
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Same for zsh, then you can enter "..." which is 2 characters
shorter ;)
Just tried it on my computer, using zsh 4.2.1, and this doesn't work.
Writing '..' in the commandline doesn't transport me to the parent
directory.
[...]
setopt autocd
% setopt autocd
% ...
zsh: correct '...' to '..' [nyae]? n
zsh: command not found: ...
%
--
© 2005 Kurt Swanson AB
Stephane CHAZELAS
2006-03-08 08:02:28 UTC
Post by Kurt Swanson
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Same for zsh, then you can enter "..." which is 2 characters
shorter ;)
Just tried it on my computer, using zsh 4.2.1, and this doesn't work.
Writing '..' in the commandline doesn't transport me to the parent
directory.
[...]
setopt autocd
% setopt autocd
% ...
zsh: correct '...' to '..' [nyae]? n
zsh: command not found: ...
%
See earlier messages. It supposes an
alias -g ...=../..
--
Stéphane
Kurt Swanson
2006-03-08 18:29:41 UTC
Post by Stephane CHAZELAS
Post by Kurt Swanson
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Same for zsh, then you can enter "..." which is 2 characters
shorter ;)
Just tried it on my computer, using zsh 4.2.1, and this doesn't work.
Writing '..' in the commandline doesn't transport me to the parent
directory.
[...]
setopt autocd
% setopt autocd
% ...
zsh: correct '...' to '..' [nyae]? n
zsh: command not found: ...
%
See earlier messages. It supposes an
alias -g ...=../..
Sorry I missed that in the 373 messages the two of you have been
trading over whose shell is bigger...

What would be cool in a shell is if one could define something like:

alias --regexp ..*=«some-substitution-matching»

In this specific case, using zsh:

% alias -s .=cddots
% cddots () {
cd `echo $1 | sed -e 's-\.-\.\./-g'`
}
% ...
zsh: command not found: ...
%

I can't figure out why this doesn't work. The alias is accepted,
i.e. it shows with "alias -s". I tried a number of suffix aliases
definitions, and none with a leading "." are recognized, although all
are accepted and defined. Changing the "." to any string of letters
"works" (in that cddots at least gets called...)

According to the -s description under alias in man zshbuiltins, this
should work. Maybe a bug?
--
© 2006 Kurt Swanson AB
Stephane CHAZELAS
2006-03-08 19:39:59 UTC
2006-03-08, 10:29(-08), Kurt Swanson:
[...]
Post by Kurt Swanson
% alias -s .=cddots
% cddots () {
cd `echo $1 | sed -e 's-\.-\.\./-g'`
}
% ...
zsh: command not found: ...
%
I can't figure out why this doesn't work. The alias is accepted,
i.e. it shows with "alias -s". I tried a number of suffix aliases
definitions, and none with a leading "." are recognized, although all
are accepted and defined. Changing the "." to any string of letters
"works" (in that cddots at least gets called...)
According to the -s description under alias in man zshbuiltins, this
should work. Maybe a bug?
I would guess it's because zsh parses

"..." as "<..>.<>", and looks <> up for the list of possible -s
aliases. It looks like you can't assign such aliases for
anything with a dot.

~$ alias -s 'a.b=echo'
~$ a.a.b
zsh: command not found: a.a.b

Could have been useful for things like *.tar.gz, though.


What you can do, and I'm sure Axel will find it neater ;) is

rationalise-dot() {
if [[ $LBUFFER = *.. ]]; then
LBUFFER+=/..
else
LBUFFER+=.
fi
}

zle -N rationalise-dot
bindkey . rationalise-dot
--
Stéphane
Kurt Swanson
2006-03-08 20:36:55 UTC
Post by Stephane CHAZELAS
[...]
Post by Kurt Swanson
% alias -s .=cddots
% cddots () {
cd `echo $1 | sed -e 's-\.-\.\./-g'`
}
% ...
zsh: command not found: ...
%
I can't figure out why this doesn't work. The alias is accepted,
i.e. it shows with "alias -s". I tried a number of suffix aliases
definitions, and none with a leading "." are recognized, although all
are accepted and defined. Changing the "." to any string of letters
"works" (in that cddots at least gets called...)
According to the -s description under alias in man zshbuiltins, this
should work. Maybe a bug?
I would guess it's because zsh parses
"..." as "<..>.<>", and looks <> up for the list of possible -s
aliases. It looks like you can't assign such aliases for
anything with a dot.
So this is either a bug in the parser, or a bug in the man page...
Post by Stephane CHAZELAS
~$ alias -s 'a.b=echo'
~$ a.a.b
zsh: command not found: a.a.b
Could have been useful for things like *.tar.gz, though.
Yeah, and a myriad of others...

It seems like suffix aliases were only conceived to be used for files.
Post by Stephane CHAZELAS
What you can do, and I'm sure Axel will find it neater ;) is
rationalise-dot() {
if [[ $LBUFFER = *.. ]]; then
LBUFFER+=/..
else
LBUFFER+=.
fi
}
zle -N rationalise-dot
bindkey . rationalise-dot
Well, that's fun, but a bit of an overhead...
--
© 2006 Kurt Swanson AB
l***@gmail.com
2006-03-09 01:09:58 UTC
Post by Stephane CHAZELAS
[...]
Post by Kurt Swanson
% alias -s .=cddots
% cddots () {
cd `echo $1 | sed -e 's-\.-\.\./-g'`
}
% ...
zsh: command not found: ...
%
I can't figure out why this doesn't work. The alias is accepted,
i.e. it shows with "alias -s". I tried a number of suffix aliases
definitions, and none with a leading "." are recognized, although all
are accepted and defined. Changing the "." to any string of letters
"works" (in that cddots at least gets called...)
According to the -s description under alias in man zshbuiltins, this
should work. Maybe a bug?
I would guess it's because zsh parses
"..." as "<..>.<>", and looks <> up for the list of possible -s
aliases. It looks like you can't assign such aliases for
anything with a dot.
~$ alias -s 'a.b=echo'
~$ a.a.b
zsh: command not found: a.a.b
Could have been useful for things like *.tar.gz, though.
What you can do, and I'm sure Axel will find it neater ;) is
Much nicer. You can actually _see_ what will be executed.
Post by Stephane CHAZELAS
rationalise-dot() {
if [[ $LBUFFER = *.. ]]; then
LBUFFER+=/..
else
LBUFFER+=.
fi
}
zle -N rationalise-dot
bindkey . rationalise-dot
--
Stéphane
--
Axel
l***@gmail.com
2006-03-09 01:03:14 UTC
Post by Kurt Swanson
Post by Stephane CHAZELAS
Post by Kurt Swanson
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Same for zsh, then you can enter "..." which is 2 characters
shorter ;)
Just tried it on my computer, using zsh 4.2.1, and this doesn't work.
Writing '..' in the commandline doesn't transport me to the parent
directory.
[...]
setopt autocd
% setopt autocd
% ...
zsh: correct '...' to '..' [nyae]? n
zsh: command not found: ...
%
See earlier messages. It supposes an
alias -g ...=../..
Sorry I missed that in the 373 messages the two of you have been
trading over whose shell is bigger...
Heh. Here I thought I was having a rewarding discussion on the features
of respective shells in a completely civil and troll-free manner. And
all the time I was doing something else. Oh well, I'm having a blast
either way.
Post by Kurt Swanson
alias --regexp ..*=«some-substitution-matching»
% alias -s .=cddots
% cddots () {
cd `echo $1 | sed -e 's-\.-\.\./-g'`
}
% ...
zsh: command not found: ...
%
I can't figure out why this doesn't work. The alias is accepted,
i.e. it shows with "alias -s". I tried a number of suffix aliases
definitions, and none with a leading "." are recognized, although all
are accepted and defined. Changing the "." to any string of letters
"works" (in that cddots at least gets called...)
According to the -s description under alias in man zshbuiltins, this
should work. Maybe a bug?
--
© 2006 Kurt Swanson AB
--
Axel
Kurt Swanson
2006-03-09 01:12:28 UTC
Post by l***@gmail.com
Post by Kurt Swanson
Post by Stephane CHAZELAS
Post by Kurt Swanson
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Same for zsh, then you can enter "..." which is 2 characters
shorter ;)
Just tried it on my computer, using zsh 4.2.1, and this doesn't work.
Writing '..' in the commandline doesn't transport me to the parent
directory.
[...]
setopt autocd
% setopt autocd
% ...
zsh: correct '...' to '..' [nyae]? n
zsh: command not found: ...
%
See earlier messages. It supposes an
alias -g ...=../..
Sorry I missed that in the 373 messages the two of you have been
trading over whose shell is bigger...
Heh. Here I thought I was having a rewarding discussion on the features
of respective shells in a completely civil and troll-free manner. And
all the time I was doing something else. Oh well, I'm having a blast
either way.
Troll-free, and civil, yes. But also excruciating! For example,
whose shell better handles completion on certain deprecated network
commands....

It doesn't bother me, in any case...
--
© 2006 Kurt Swanson AB
l***@gmail.com
2006-03-07 19:06:30 UTC
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
alias die='{ echo >&2 ERROR; return 1; }'
that can't be implemented as a function.
Which is a _good_ thing, in my opinion. This is like the difference
between Basic and C; in Basic you can jump to any line, but in C you
can only jump to specific code entry points, called functions. This is
the same thing, only with code exit points instead. If you hide the
code exit point like that, it's impossible to know by simply scanning
the code for 'exit', 'return' and 'break' where the exit points are,
meaning it's much harder to see the code flow.
This is a tradeoff between supporting 'cool hacks' and supporting
maintainable coding practices, and to me the latter is more important.
Well, it depends. A shell is mostly a tool for interactive use.
You don't want to bridle the user there by adding constraints
that he doesn't care about. At the prompt, I don't care if what
I type is maintainable or nice code. That's one reason I think
it difficult to have a good shell that is also a good
programming language.
Sure, the requirements are different. But I have found that a lot of the
things that you can get through ugly hacks can be done in a non-hackish
way as well, if you just give it a bit of thought. Saying 'this is only
for interactive mode, so it's ok that the syntax is a horrible mess' is
a cop out in my opinion.

As an example of this, in zsh it is not uncommon to use e.g. 'L' as a
global alias for '|less;'. In fish, I have added the following
keybinding instead:

"\M-p": if commandline -j|grep -v 'less *$' >/dev/null; commandline -aj
"|less;"; end

What it does is if you press Meta-p, check if the current job
definition ends with 'less' and if not, append the string '|less;'.

Advantages:

* Readable. You can see on the commandline what the code does.
* No rare problems when you actually _want_ to use L as an argument.
Post by Stephane Chazelas
BTW, if we take that comparison further with C, aliases are like
preprocessor macros.
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
In fish, if you enter a directory name instead of a command, it is
assumed that you want to cd to that directory, so you can simply write
'../..' to do the same thing, which is one character shorter. ;-)
Same for zsh, then you can enter "..." which is 2 characters
shorter ;)
Just tried it on my computer, using zsh 4.2.1, and this doesn't work.
Writing '..' in the commandline doesn't transport me to the parent
directory.
[...]
setopt autocd
I never use that feature, I should disable that option. I prefer
to use cd so that the completion only completes directories (or
~user or +<dirstack-number>)
*meh*
Yet again zsh implements a huge number of features, but not in a well
thought out way, and hence the feature is off by default. In fish, tab
completions understand both implicit cd and CDPATH.
Post by Stephane Chazelas
--
Stephane
--
Axel
Jordan Abel
2006-03-07 19:14:12 UTC
Post by l***@gmail.com
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
alias die='{ echo >&2 ERROR; return 1; }'
that can't be implemented as a function.
Which is a _good_ thing, in my opinion. This is like the difference
between Basic and C; in Basic you can jump to any line, but in C you
can only jump to specific code entry points, called functions. This is
the same thing, only with code exit points instead. If you hide the
code exit point like that, it's impossible to know by simply scanning
the code for 'exit', 'return' and 'break' where the exit points are,
meaning it's much harder to see the code flow.
This is a tradeoff between supporting 'cool hacks' and supporting
maintainable coding practices, and to me the latter is more important.
Well, it depends. A shell is mostly a tool for interactive use.
You don't want to bridle the user there by adding constraints
that he doesn't care about. At the prompt, I don't care if what
I type is maintainable or nice code. That's one reason I think
it difficult to have a good shell that is also a good
programming language.
Sure, the requirements are different. But I have found that a lot of the
things that you can get through ugly hacks can be done in a non-hackish
way as well, if you just give it a bit of thought. Saying 'this is only
for interactive mode, so it's ok that the syntax is a horrible mess' is
a cop out in my opinion.
As an example of this, in zsh it is not uncommon to use e.g. 'L' as a
global alias for '|less;'. In fish, I have added the following
"\M-p": if commandline -j|grep -v 'less *$' >/dev/null; commandline -aj
"|less;"; end
What it does is if you press Meta-p, check if the current job
definition ends with 'less' and if not, append the string '|less;'.
It'd be simpler to append it unconditionally. Processes are cheap these
days.
l***@gmail.com
2006-03-07 23:06:46 UTC
[...]
Post by Jordan Abel
Post by l***@gmail.com
Sure, the requirements are different. But I have found that a lot of the
things that you can get through ugly hacks can be done in a non-hackish
way as well, if you just give it a bit of thought. Saying 'this is only
for interactive mode, so it's ok that the syntax is a horrible mess' is
a cop out in my opinion.
As an example of this, in zsh it is not uncommon to use e.g. 'L' as a
global alias for '|less;'. In fish, I have added the following
"\M-p": if commandline -j|grep -v 'less *$' >/dev/null; commandline -aj
"|less;"; end
What it does is if you press Meta-p, check if the current job
definition ends with 'less' and if not, append the string '|less;'.
It'd be simpler to append it unconditionally. Processes are cheap these
days.
I'm sure it would. It's just me trying to be clever, not wanting to
turn the prompt into 'foo|less|less|less|less|less' on repeated
keypresses. Perhaps this is overdesigning things a bit.

The part that I do like, however, is that it places the |less at the
correct place even if there are multiple commands on the prompt. For
example, given

echo foo; whoami
^
Cursor is here

pressing Meta-p will result in

echo foo|less; whoami

Which I think is nice.
--
Axel
Jordan Abel
2006-03-08 00:22:13 UTC
Post by l***@gmail.com
[...]
Post by Jordan Abel
Post by l***@gmail.com
Sure, the requirements are different. But I have found that a lot of the
things that you can get through ugly hacks can be done in a non-hackish
way as well, if you just give it a bit of thought. Saying 'this is only
for interactive mode, so it's ok that the syntax is a horrible mess' is
a cop out in my opinion.
As an example of this, in zsh it is not uncommon to use e.g. 'L' as a
global alias for '|less;'. In fish, I have added the following
"\M-p": if commandline -j|grep -v 'less *$' >/dev/null; commandline -aj
"|less;"; end
What it does is if you press Meta-p, check if the current job
definition ends with 'less' and if not, append the string '|less;'.
It'd be simpler to append it unconditionally. Processes are cheap these
days.
I'm sure it would. It's just me trying to be clever, not wanting to
turn the prompt into 'foo|less|less|less|less|less' on repeated
keypresses. Perhaps this is overdesigning things a bit.
Have the keypress in question both append the pipe to less and dispatch
the command line for execution.
Post by l***@gmail.com
The part that I do like, however, is that it places the |less at the
correct place even if there are multiple commands on the prompt. For
example, given
echo foo; whoami
^
Cursor is here
pressing Meta-p will result in
echo foo|less; whoami
Which I think is nice.
A key-binding to "execute the command piped to less", all in one action,
would be even nicer.

[hell, even i might buy your shell then - or at least download it, since
it's [i hope] open-source]
l***@gmail.com
2006-03-08 10:07:08 UTC
Post by Jordan Abel
Post by l***@gmail.com
[...]
Post by Jordan Abel
Post by l***@gmail.com
Sure, the requirements are different. But I have found that a lot of the
things that you can get through ugly hacks can be done in a non-hackish
way as well, if you just give it a bit of thought. Saying 'this is only
for interactive mode, so it's ok that the syntax is a horrible mess' is
a cop out in my opinion.
As an example of this, in zsh it is not uncommon to use e.g. 'L' as a
global alias for '|less;'. In fish, I have added the following
"\M-p": if commandline -j|grep -v 'less *$' >/dev/null; commandline -aj
"|less;"; end
What it does is if you press Meta-p, check if the current job
definition ends with 'less' and if not, append the string '|less;'.
It'd be simpler to append it unconditionally. Processes are cheap these
days.
I'm sure it would. It's just me trying to be clever, not wanting to
turn the prompt into 'foo|less|less|less|less|less' on repeated
keypresses. Perhaps this is overdesigning things a bit.
Have the keypress in question both append the pipe to less and dispatch
the command line for execution.
Post by l***@gmail.com
The part that I do like, however, is that it places the |less at the
correct place even if there are multiple commands on the prompt. For
example, given
echo foo; whoami
^
Cursor is here
pressing Meta-p will result in
echo foo|less; whoami
Which I think is nice.
A key-binding to "execute the command piped to less", all in one action,
would be even nicer.
[hell, even i might buy your shell then - or at least download it, since
it's [i hope] open-source]
Download and install fish, and add the following to ~/.fish_inputrc:

"\M-l": commandline -aj "|less;"; eval (commandline); commandline ""

The first command appends '|less;' to the job under the cursor on the
commandline, the second calls eval for the current contents of the
commandline and the third one clears the commandline.

I'll probably add this to the next fish release, it does seem like a
better thing to do.

Fish is GPL, btw.
--
Axel
Stephane Chazelas
2006-03-08 10:35:30 UTC
On 8 Mar 2006 02:07:08 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Jordan Abel
A key-binding to "execute the command piped to less", all in one action,
would be even nicer.
[hell, even i might buy your shell then - or at least download it, since
it's [i hope] open-source]
"\M-l": commandline -aj "|less;"; eval (commandline); commandline ""
The first command appends '|less;' to the job under the cursor on the
commandline, the second calls eval for the current contents of the
commandline and the third one clears the commandline.
I'll probably add this to the next fish release, it does seem like a
better thing to do.
[...]

With zsh:

append-pipe-pager-and-accept-line() {
emulate -L zsh
setopt extendedglob

[[ $BUFFER = *\|[[:blank:]]#less[[:blank:]]# ]] ||
BUFFER="{$BUFFER} | ${PAGER:-more}"

zle accept-line
}

zle -N append-pipe-pager-and-accept-line

bindkey '\el' append-pipe-pager-and-accept-line
--
Stephane
l***@gmail.com
2006-03-08 10:43:31 UTC
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
A key-binding to "execute the command piped to less", all in one action,
would be even nicer.
[hell, even i might buy your shell then - or at least download it, since
it's [i hope] open-source]
"\M-l": commandline -aj "|less;"; eval (commandline); commandline ""
The first command appends'|less;' to job under the cursor on the
commandline, the second calls eval for the current contents of the
commandline and the third one clears the commandline.
I'll probably add this to the next fish release, it does seem like a
better thing to do.
[...]
append-pipe-pager-and-accept-line() {
emulate -L zsh
setopt extendedglob
[[ $BUFFER = *\|[[:blank:]]#less[[:blank:]]# ]] ||
BUFFER="{$BUFFER} | ${PAGER:-more}"
zle accept-line
}
zle -N append-pipe-pager-and-accept-line
bindkey '\el' append-pipe-pager-and-accept-line
Sure. I wasn't trying to say that this wasn't possible in other shells,
only that ugly hacks, like global aliases, generally are possible to
implement in a non-ugly, non-hackish way that is comparatively
convenient.
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane CHAZELAS
2006-03-08 08:01:35 UTC
2006-03-7, 11:06(-08), ***@gmail.com:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
setopt autocd
I never use that feature, I should disable that option. I prefer
to use cd so that the completion only completes directories (or
~user or +<dirstack-number>)
*meh*
Yet again zsh implements a huge number of features, but not in a well
thought out way, and hence the feature is off by default. In fish, tab
completions understand both implicit cd and CDPATH.
[...]

Of course, zsh does as well, but on the first word, obviously,
it also completes command names and variable names (for VAR=val
value), while after "cd" it only completes directories.
--
Stéphane
Jordan Abel
2006-03-07 18:58:55 UTC
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
I simply think that a real function syntax is much more suited for
programming than simple alias substitution.
Yes, that's why you use functions for programming and aliases
for aliasing.
There are some times where aliases are useful for programming.
But some shells disable them.
alias die='{ echo >&2 ERROR; return 1; }'
that can't be implemented as a function.
die() {
echo >&2 ERROR;
exit 1;
}
Stephane CHAZELAS
2006-03-08 08:04:39 UTC
Post by Jordan Abel
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
I simply think that a real function syntax is much more suited for
programming than simple alias substitution.
Yes, that's why you use functions for programming and aliases
for aliasing.
There are some times where aliases are useful for programming.
But some shells disable them.
alias die='{ echo >&2 ERROR; return 1; }'
that can't be implemented as a function.
die() {
echo >&2 ERROR;
exit 1;
}
That exits the script, not the current function.
--
Stéphane
Jordan Abel
2006-03-08 08:15:16 UTC
Post by Stephane CHAZELAS
Post by Jordan Abel
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
I simply think that a real function syntax is much more suited for
programming than simple alias substitution.
Yes, that's why you use functions for programming and aliases
for aliasing.
There are some times where aliases are useful for programming.
But some shells disable them.
alias die='{ echo >&2 ERROR; return 1; }'
that can't be implemented as a function.
die() {
echo >&2 ERROR;
exit 1;
}
That exits the script, not the current function.
oh. I wasn't aware that was the effect you were going for.

What exactly do you want? multiple breakout 'return'? _no_ language has
that - the closest thing is probably longjmp in c.
Stephane CHAZELAS
2006-03-08 08:25:14 UTC
2006-03-8, 08:15(+00), Jordan Abel:
[...]
Post by Jordan Abel
Post by Stephane CHAZELAS
Post by Jordan Abel
die() {
echo >&2 ERROR;
exit 1;
}
That exits the script, not the current function.
oh. I wasn't aware that was the effect you were going for.
What exactly do you want? multiple breakout 'return'? _no_ language has
that - the closest thing is probably longjmp in c.
No I just want the equivalent of:

#define RETURN_WITH_ERROR { fputs("ERROR\n", stderr); return 1; }

so that I can use it as:

int f() {
if (blah) RETURN_WITH_ERROR;

if (foo) RETURN_WITH_ERROR;

if (bar) RETURN_WITH_ERROR;
}

(yes I know, I could have written it otherwise or with a goto,
that was just to show the point).
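In shell terms, the contrast can be sketched like this (the names die_alias/die_func are invented for the demo; bash needs alias expansion switched on in scripts, while e.g. dash has it on by default, hence the guard):

```shell
#!/bin/sh
# bash only expands aliases in scripts after this; other shells ignore it:
shopt -s expand_aliases 2>/dev/null || true

# Alias: its body is pasted into the caller, so 'return' leaves the caller.
alias die_alias='{ echo >&2 ERROR; return 1; }'

# Function: 'return' only leaves die_func itself.
die_func() { echo >&2 ERROR; return 1; }

f_alias() {
    die_alias
    echo "not reached: the inlined return left f_alias"
}

f_func() {
    die_func
    echo "reached: die_func returned only from itself"
}
```

Calling f_alias prints ERROR and returns 1; f_func prints ERROR and then keeps going.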
--
Stéphane
l***@gmail.com
2006-03-08 10:59:13 UTC
Permalink
Post by Stephane CHAZELAS
[...]
Post by Jordan Abel
Post by Stephane CHAZELAS
Post by Jordan Abel
die() {
echo >&2 ERROR;
exit 1;
}
That exits the script, not the current function.
oh. I wasn't aware that was the effect you were going for.
What exactly do you want? multiple breakout 'return'? _no_ language has
that - the closest thing is probably longjmp in c.
#define RETURN_WITH_ERROR { fputs("ERROR\n", stderr); return 1; }
int f() {
if (blah) RETURN_WITH_ERROR;
if (foo) RETURN_WITH_ERROR;
if (bar) RETURN_WITH_ERROR;
}
(yes I know, I could have written it otherwise or with a goto,
that was just to show the point).
Which is my point exactly. You can do the same things without resorting
to code obfuscation using aliases; I have yet to see an example
where aliases help more than they hurt.

The '...' global alias could be replaced by a keybinding, e.g. Meta-g,
that inserts '../..' at the current cursor position. This means that
your history file will always be in plain, sane shell script, and it
means that you won't accidentally trigger the substitution when you
didn't mean to.

The return with error described above could be implemented in any
number of ways, including

function f_internal
blah; or return 1
foo; or return 1
bar; or return 1
end

function f
if not f_internal
echo ERROR >&2
return 1
end
end

Notice that you can actually _see_ all the exit points of the function,
making it much easier to predict the code flow.
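For comparison, the same pattern renders almost one-to-one in POSIX sh; a sketch with stand-in commands (blah/foo/bar are hypothetical checks here, with foo made to fail so the error path runs):

```shell
#!/bin/sh
# Stand-ins for the real checks; foo simulates the failing step.
blah() { true; }
foo()  { false; }
bar()  { true; }

f_internal() {
    blah || return 1
    foo  || return 1
    bar  || return 1
}

f() {
    if ! f_internal; then
        echo ERROR >&2
        return 1
    fi
}
```

As in the fish version, every exit point of f_internal is visible at a glance, and the error message lives in exactly one place.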
Post by Stephane CHAZELAS
--
Stéphane
--
Axel
laura fairhead
2006-03-08 20:58:15 UTC
Permalink
Post by Stephane CHAZELAS
[...]
Post by Jordan Abel
Post by Stephane CHAZELAS
Post by Jordan Abel
die() {
echo >&2 ERROR;
exit 1;
}
That exits the script, not the current function.
oh. I wasn't aware that was the effect you were going for.
What exactly do you want? multiple breakout 'return'? _no_ language has
that - the closest thing is probably longjmp in c.
#define RETURN_WITH_ERROR { fputs("ERROR\n", stderr); return 1; }
int f() {
if (blah) RETURN_WITH_ERROR;
if (foo) RETURN_WITH_ERROR;
if (bar) RETURN_WITH_ERROR;
}
Hi Stephane,

How about -

RETURN_WITH_ERROR='{ echo >&2 "ERROR"; return 1; }'

f(){
if blah; then eval "$RETURN_WITH_ERROR"; fi
if foo; then eval "$RETURN_WITH_ERROR"; fi
if bar; then eval "$RETURN_WITH_ERROR"; fi
}

You can put an entire function to be inlined in the variable;
if you want args to the function it might be more awkward
mmmm ;)

seeyafrom
laura
Post by Stephane CHAZELAS
(yes I know, I could have written it otherwise or with a goto,
that was just to show the point).
--
Stéphane
--
echo ***@ittnreen.tocm |sed 's/\(.\)\(.\)/\2\1/g'
Stephane Chazelas
2006-03-09 09:13:53 UTC
Permalink
On Wed, 8 Mar 2006 20:58:15 +0000 (UTC), laura fairhead wrote:
[...]
Post by laura fairhead
How about -
RETURN_WITH_ERROR='{ echo >&2 "ERROR"; return 1; }'
f(){
if blah; then eval "$RETURN_WITH_ERROR"; fi
if foo; then eval "$RETURN_WITH_ERROR"; fi
if bar; then eval "$RETURN_WITH_ERROR"; fi
}
You can put an entire function to be inlined in the variable;
if you want args to the function it might be more awkward
mmmm ;)
[...]

Indeed, or:

RETURN_WITH_ERROR='eval echo >&2 "ERROR"; return 1'

f() {
... || $RETURN_WITH_ERROR
}

But that relies on the current value of IFS.

CODE='
rm -f ./*.tmp
echo >&2 "ERROR"; return 1
'
RETURN_WITH_ERROR='eval eval "$CODE"'

f() {
... || $RETURN_WITH_ERROR
}

Would be slightly better with regards to word splitting and
filename generation.

That could be considered a dirty hack, though, I believe ;)
--
Stephane
Jordan Abel
2006-03-07 18:56:48 UTC
Permalink
Post by l***@gmail.com
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
bash> FOO=BAR
bash> FOO=BAZ echo $FOO
BAR
Which is something different. And I notice that bash does indeed _not_
stray from the traditional output of BAR in the above situation. My
bad.
That's because you misunderstand what the VAR=value cmd syntax is for -
it doesn't set a shell variable at all - it puts a variable in the
command's environment. The real problem is in the half-assed equivalence
between shell variables and environment variables.
try
FOO=BAR
FOO=BAZ env | grep FOO
I understand why that happens. I got a bit confused before since I
don't use Posix shells enough these days, but I know _why_ the above
code does what it does. I just don't think it's a good syntax.
It's a direct mapping to the execve system call. It couldn't be
more intuitive if you think of it that way. And had it output
BAR above, I would have found it counter-intuitive.
The problem I have with this is that a direct mapping to a low-level
syscall is not a suitable abstraction to use as the main invocation of
commands in a high-level scripting language.
A shell may incidentally be a scripting language, but its primary
purpose is to run other programs. Everything else is a way to determine
what files those programs have open, choose what programs get executed
in what order, choose what the contents of the argument list and
environment list for those programs are, and choose whether/when to wait
for those programs to finish.
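The semantics under discussion are quick to verify in any POSIX shell; the point is that the shell expands $FOO before the temporary assignment can matter:

```shell
#!/bin/sh
FOO=BAR
# The parent shell expands $FOO before echo runs, so the temporary
# assignment never affects what echo receives:
FOO=BAZ echo "$FOO"            # prints BAR
# The assignment does reach the child process's environment:
FOO=BAZ sh -c 'echo "$FOO"'    # prints BAZ
```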
Post by l***@gmail.com
Instead of trying to provide the user with a thin execve wrapper, it
might be better (in my opinion) to focus on how to design a good
_language_.
What you consider a good language, and I may even agree, would NOT make
a good shell.
Post by l***@gmail.com
Well, that is a bit of anthropomorphisation on my part. The shells
don't really talk to me. But according to Wikipedia "A macro language
is a programming language in which all or most computation is done by
expanding macros", which is very much true for shells. Variable
substitution, wildcard substitution, alias substitution and many other
language features work through macro expansion or an emulation of
macro expansion.
The shell doesn't do computation. it runs programs. I would say that
"all or most computation" is done not by expanding macros, but by
calling external programs.
l***@gmail.com
2006-03-07 23:06:57 UTC
Permalink
Post by Jordan Abel
Post by l***@gmail.com
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
bash> FOO=BAR
bash> FOO=BAZ echo $FOO
BAR
Which is something different. And I notice that bash does indeed _not_
stray from the traditional output of BAR in the above situation. My
bad.
That's because you misunderstand what the VAR=value cmd syntax is for -
it doesn't set a shell variable at all - it puts a variable in the
command's environment. The real problem is in the half-assed equivalence
between shell variables and environment variables.
try
FOO=BAR
FOO=BAZ env | grep FOO
I understand why that happens. I got a bit confused before since I
don't use Posix shells enough these days, but I know _why_ the above
code does what it does. I just don't think it's a good syntax.
It's a direct mapping to the execve system call. It couldn't be
more intuitive if you think of it that way. And had it output
BAR above, I would have found it counter-intuitive.
The problem I have with this is that a direct mapping to a low-level
syscall is not a suitable abstraction to use as the main invocation of
commands in a high-level scripting language.
A shell may incidentally be a scripting language, but its primary
purpose is to run other programs. Everything else is a way to determine
what files those programs have open, choose what programs get executed
in what order, choose what the contents of the argument list and
environment list for those programs are, and choose whether/when to wait
for those programs to finish.
Replace 'program' with 'function' and you have a pretty good
description of e.g. C. I agree that different languages have different
requirements. Shells are languages designed to run other programs,
which is a rather special domain, much like Matlab is a language for
numerical calculations. Domain-specific languages have very special
requirements, different from general-purpose languages like Java, C++
or Python. I think the fish design fulfills the requirements of a shell
language very well, though.
Post by Jordan Abel
Post by l***@gmail.com
Instead of trying to provide the user with a thin execve wrapper, it
might be better (in my opinion) to focus on how to design a good
_language_.
What you consider a good language, and I may even agree, would NOT make
a good shell.
I'm not trying to argue that the shell should be a general purpose
language. Quite the contrary, I want it to solve a very specific domain
of problems. I don't think that a thin wrapper around execve is a good
design for any language, including a shell.
Post by Jordan Abel
Post by l***@gmail.com
Well, that is a bit of anthropomorphisation on my part. The shells
don't really talk to me. But according to Wikipedia "A macro language
is a programming language in which all or most computation is done by
expanding macros", which is very much true for shells. Variable
substitution, wildcard substitution, alias substitution and many other
language features work through macro expansion or an emulation of
macro expansion.
The shell doesn't do computation. it runs programs. I would say that
"all or most computation" is done not by expanding macros, but by
calling external programs.
From the CPU's point of view, pretty much the only thing a shell ever
does is call fork. That is by far the most costly operation. But the
implementation of fork is provided by the OS, and calling it takes
only one line of code. From a shell implementor's point of view, a
traditional shell is a macro expander, since that is what most of the
code you write does.
--
Axel
bsh
2006-03-07 02:55:06 UTC
Permalink
...
Thank you for another shell resource; I'm updating my record
of "fish" since the last posting to C.U.S. which I remember
well. Your points are well taken, even if bourne-like shell
scripting has only acceptably well "weathered" the subsequent
syntactic supersets applied to it over the many years.
Posix shells have no notion of local variables ...
I want to have noted that the modus operandi of POSIX is to
not mandate features, but minimal functionality. The distinction
for this instance is that local variable scopes can be provided
by a superset of the POSIX 1003.2 that does not break required
standards.
Quickly now, which of the following are legal function definitions?
hello () {echo hello }
hello () {echo hello;}
hello () {;echo hello }
hello () {;echo hello;}
hello () { echo hello }
hello () { echo hello;}
hello () { ;echo hello }
hello () { ;echo hello;}
... that is proof of a completely broken syntax.
(Number 4 should also work....)

This is completely untrue. Although the explanation is beyond
the scope of the discussion (and our patience...), the grouping
keyword ("}") is commensurate with the other terminating
keywords "esac", "done", and "fi", and as such require being on
their own line or after another keyword.

For instance, I've written scripts that end with one line:
"} esac done fi" which is documented syntax back to Sys6 sh(1).

Granted, the syntactic hoops that must be jumped through are
more than found in shells rc/akanga/es, because these are written
in lex/yacc. k/sh is written as a triple-pass recursive-descent
parser, with _separate_ parsing rules for variable-lists and
redirection-operators; however, the syntax is regular enough
that I have written a parser for k/sh(1) that accepts _all_
legal constructs, whatever the contextual quoting rules
(such as whether quote-removal occurs or not within parameter
expansion).

Incidentally, the fact that bash(1) allows
"[function] hello [()] { echo hello[;] }" is provided for the
programmer's convenience.
Fish supports universal variables....
Potential unmaintainable code and intermittent bug alert!
Speaking for myself, I've found and used two methods
that allow the "migration" of the environment to-and-from
different processes/jobs, which is to say that the
attractiveness of scripting is that any given solution
might be had, even if it is not elegant.

=Brian
l***@gmail.com
2006-03-07 04:16:37 UTC
Permalink
Post by bsh
...
Thank you for another shell resource; I'm updating my record
of "fish" since the last posting to C.U.S. which I remember
well. Your points are well taken, even if bourne-like shell
scripting has only acceptably well "weathered" the subsequent
syntactic supersets applied to it over the many years.
Thank you for your kind words. I'd say that many of the extensions
provided over the years have been implemented in rather inconsistent
ways. If functions, indirect variable expansion, arrays, local
variables and other new features had been better integrated into the
shell, the language would, in my opinion, have been in a much better
condition today.
Post by bsh
Posix shells have no notion of local variables ...
I want to have noted that the modus operandi of POSIX is to
not mandate features, but minimal functionality. The distinction
for this instance is that local variable scopes can be provided
by a superset of the POSIX 1003.2 that does not break required
standards.
I believe I mentioned that bash provides local variables in my original
post. But the fact that you have to explicitly declare each variable as
such increases the likelihood of slipups that lead to bugs. To support
this argument, one simply needs to look at some real-world scripts. I
can find lots of places where the 'local' builtin is not used when it
should be in the startup scripts of my Fedora machine.
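The kind of slipup described here is easy to reproduce; a minimal sketch (the function name is invented for the demo, and note that 'local' itself is a shell extension, not POSIX):

```shell
#!/bin/sh
count=10

reset_counter() {
    count=0    # forgot 'local count': this clobbers the caller's variable
}

reset_counter
echo "$count"   # prints 0, not 10
```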
Post by bsh
Quickly now, which of the following are legal function definitions?
hello () {echo hello }
hello () {echo hello;}
hello () {;echo hello }
hello () {;echo hello;}
hello () { echo hello }
hello () { echo hello;}
hello () { ;echo hello }
hello () { ;echo hello;}
... that is proof of a completely broken syntax.
(Number 4 should also work....)
It doesn't in bash:

bash$ hello () {;echo hello;}
bash: syntax error near unexpected token `;'
Post by bsh
This is completely untrue. Although the explanation is beyond
the scope of the discussion (and our patience...), the grouping
keyword ("}") is commensurate with the other terminating
keywords "esac", "done", and "fi", and as such require being on
their own line or after another keyword.
If I understand you correctly, your point is that '}' should be viewed
as a command-like keyword, like 'esac' and 'fi'. And if you do so, it
will indeed seem more intuitive. But that is beside the point for me,
since one would not expect '}' to behave that way because:

* The syntax is clearly borrowed from/inspired by C-like languages,
where } does not behave like a function or reserved word.
* One does not expect alphabetical and non-alphabetical keywords to
follow the same rules. Notice that none of '&', '&&', '|', ';', ';;',
'`' and ')' require a preceding newline or other such niceties while
'fi', 'done', 'esac' and friends do. The only exceptions I can think of
are '{' and '}'.
* The syntax strongly resembles () and to some degree also $(), i.e.
subshells and command substitutions, neither of which use ')' as a
regular command.

To me, the latter is a _very_ solid reason to expect '}' to behave in a
very different way. This signals that the syntax was tacked on as an
afterthought rather than being a well-thought-out design. And unless
I'm mistaken, this syntax was borrowed from rc, so this impression is
indeed to some extent correct.
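The split personality of '}' is easy to poke at in sh or bash (zsh, as noted elsewhere in the thread, parses this differently):

```shell
#!/bin/sh
# Not in command position: '}' is an ordinary word.
echo }                  # prints: }

# In command position: '}' closes a group, but only after ';' or a newline.
{ echo hello; }         # prints: hello

# Without the ';', the '}' is read as an argument to echo and the group
# is never closed, so the parse fails before anything runs:
sh -c '{ echo hello }' 2>/dev/null || echo "parse error, as expected"
```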
Post by bsh
"} esac done fi" which is documented syntax back to Sys6 sh(1).
Yes it seems bash accepts sequences like that. As I would expect it to.
Post by bsh
Granted, the syntactic hoops that must be jumped through are
more than found in shells rc/akanga/es, because these are written
in lex/yacc. k/sh is written as a triple-pass recursive-descent
parser, with _separate_ parsing rules for variable-lists and
redirection-operators; however, the syntax is regular enough
that I have written a parser for k/sh(1) that accepts _all_
legal constructs, whatever the contextual quoting rules
(such as whether quote-removal occurs or not within parameter
expansion).
I did not know ksh needs three parsing passes. The fish parser uses
only a single pass, and the parser is <3000 lines long, with another
<1000 for the tokenizer. I have tried to make the fish syntax easy to
parse, partly based on the assumption that if it is easy for the computer
to parse something, it is probably easy for a human. While this
assumption is not always correct, I think it applies here. Fish
provides much fewer features than other shells, but the features are
more orthogonal and designed to still give you the same expressive
power.
Post by bsh
Incidentally, the fact that bash(1) allows
"[function] hello [()] { echo hello[;] }" is provided for the
programmer's convenience.
I'm not following you here.
Post by bsh
Fish supports universal variables....
Potential unmaintainable code and intermittent bug alert!
Speaking for myself, I've found and used two methods
that allow the "migration" of the environment to-and-from
different processes/jobs, which is to say that the
attractiveness of scripting is that any given solution
might be had, even if it is not elegant.
I was actually surprised at how little code was required for this: less
than 2000 lines of code, with a fair number of comments in them. And
the code does not use anything more advanced than standard Unix
sockets, either. Hopefully this removes most of your fears about
unmaintainable code and intermittent bugs.

I interpret the rest of your comment as a preference for allowing
multiple inelegant solutions rather than a single, elegant,
general-purpose one. Not that I agree, but if that is your opinion,
there is nothing preventing you from using some other method as well
for sharing variable values. The fact that fish has universal variables
does not force you to use them, nor does it make it impossible to
implement alternative schemes for doing the same thing.
Post by bsh
=Brian
--
Axel
Jordan Abel
2006-03-07 05:35:39 UTC
Permalink
Post by l***@gmail.com
If I understand you correctly, your point is that '}' should be viewed
as a command-like keyword, like 'esac' and 'fi'. And if you do so, it
will indeed seem more intuitive. But that is beside the point for me,
* The syntax is clearly borrowed/inspired by from C-like languages
where } does not behave like a function or reserved word.
It might have been better, yes, for something like "begin" and "end" to
be used, particularly given how enamored Bourne otherwise was with
ALGOL (when you #define BEGIN { #define END ;} in the shell's _source_,
you ought to be using those keywords for the language defined by the
shell)
l***@gmail.com
2006-03-07 12:48:55 UTC
Permalink
Post by Jordan Abel
Post by l***@gmail.com
If I understand you correctly, your point is that '}' should be viewed
as a command-like keyword, like 'esac' and 'fi'. And if you do so, it
will indeed seem more intuitive. But that is beside the point for me,
* The syntax is clearly borrowed/inspired by from C-like languages
where } does not behave like a function or reserved word.
It might have been better, yes, for something like "begin" and "end" to
be used, particularly given how enamored Bourne otherwise was with
ALGOL (when you #define BEGIN { #define END ;} in the shell's _source_,
you ought to be using those keywords for the language defined by the
shell)
I was under the impression that the function syntax was not part of the
original Bourne shell, but rather an addition originating from rc. But
I may be wrong.

And as you say, a keyword like 'end' might be more suitable, which is
exactly why fish uses the keyword 'end'.
--
Axel
Stephane Chazelas
2006-03-07 16:04:35 UTC
Permalink
On 7 Mar 2006 04:48:55 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Jordan Abel
It might have been better, yes, for something like "begin" and "end" to
be used, particularly given how enamored Bourne otherwise was with
ALGOL (when you #define BEGIN { #define END ;} in the shell's _source_,
you ought to be using those keywords for the language defined by the
shell)
I was under the impression that the function syntax was not part of the
original Bourne shell, but rather an addition originating from rc. But
I may be wrong.
[...]

Not from rc. I think rc dates back to the early nineties while
the Bourne shell has had functions since the early eighties. ksh
had functions before the Bourne shell did, though, using a
different syntax (function foo { ...; }).
--
Stephane
Bruce Barnett
2006-03-07 19:46:15 UTC
Permalink
Post by Stephane Chazelas
Not from rc. I think rc dates back to the early nineties while
the Bourne shell has had functions since the early eighties. ksh
had function before the Bourne shell, though but using a
different syntax (function foo { ...; }).
My System V manual (from DEC 1984) describes sh functions.
My 4.3bsd manual (1986), whose sh is the 7th Edition Unix shell, doesn't.
--
Sending unsolicited commercial e-mail to this account incurs a fee of
$500 per message, and acknowledges the legality of this contract.
l***@gmail.com
2006-03-07 23:37:19 UTC
Permalink
Post by Bruce Barnett
Post by Stephane Chazelas
Not from rc. I think rc dates back to the early nineties while
the Bourne shell has had functions since the early eighties. ksh
had function before the Bourne shell, though but using a
different syntax (function foo { ...; }).
My System V manual (from DEC 1984) describes sh functions.
My 4.3bsd manual (1986) doesn't. (7th Edition Unix).
Thanks for taking the time to look this up. One can guess then that
functions are a SysV invention, though there are of course other
possibilities. The lack of functions in 4.3bsd would at least imply
that they are not a part of the original Bourne shell, which was my
main point.
Post by Bruce Barnett
--
Sending unsolicited commercial e-mail to this account incurs a fee of
$500 per message, and acknowledges the legality of this contract.
--
Axel
Sven Mascheck
2006-03-08 01:28:50 UTC
Permalink
Post by l***@gmail.com
Post by Bruce Barnett
ksh had function before the Bourne shell, though but using a
different syntax (function foo { ...; }).
My System V manual (from DEC 1984) describes sh functions.
My 4.3bsd manual (1986) doesn't. (7th Edition Unix).
Thanks for taking the time to look this up. One can guess then that
functions are a SysV invention,
No need to guess. The Bourne shell introduced functions with
SVR2 ('84). The Korn shell might have been earlier but AFAIK
it was not widely available until ksh86 or even ksh88.
--
<http://www.in-ulm.de/~mascheck/bourne/>
Stephane CHAZELAS
2006-03-08 08:13:51 UTC
Permalink
Post by Sven Mascheck
Post by l***@gmail.com
Post by Bruce Barnett
ksh had function before the Bourne shell, though but using a
different syntax (function foo { ...; }).
My System V manual (from DEC 1984) describes sh functions.
My 4.3bsd manual (1986) doesn't. (7th Edition Unix).
Thanks for taking the time to look this up. One can guess then that
functions are a SysV invention,
No need to guess. The Bourne shell introduced functions with
SVR2 ('84). The Korn shell might have been earlier but AFAIK
it was not widely available until ksh86 or even ksh88.
There's that David Korn article on your web page that talks
about it:

http://www.in-ulm.de/~mascheck/bourne/korn.html

He says functions were added to the Bourne shell in 1982.
--
Stéphane
Sven Mascheck
2006-03-08 13:09:47 UTC
Permalink
Post by Stephane CHAZELAS
The Bourne shell introduced functions with SVR2 ('84).
There's that David Korn article on your web page that talks
http://www.in-ulm.de/~mascheck/bourne/korn.html
He says functions were added to the Bourne shell in 1982.
That might have been internal work at that time, or he just
misremembered it. But SVR2 was _released_ in '84, and SVR1
definitely had no functions yet.
Stephane CHAZELAS
2006-03-07 08:46:53 UTC
Permalink
2006-03-6, 20:16(-08), ***@gmail.com:
[...]
Post by l***@gmail.com
Thank you for your kind words. I'd say that many of the extensions
provided over the years have been implemented in rather inconsistent
ways. If functions, indirect variable expansion, arrays, local
variables and other new features had been better integrated into the
shell, the language would, in my opinion, have been in a much better
condition today.
I couldn't agree more.

[...]
Post by l***@gmail.com
Post by bsh
This is completely untrue. Although the explanation is beyond
the scope of the discussion (and our patience...), the grouping
keyword ("}") is commensurate with the other terminating
keywords "esac", "done", and "fi", and as such require being on
their own line or after another keyword.
If I understand you correctly, your point is that '}' should be viewed
as a command-like keyword, like 'esac' and 'fi'. And if you do so, it
will indeed seem more intuitive. But that is beside the point for me,
* The syntax is clearly borrowed/inspired by from C-like languages
where } does not behave like a function or reserved word.
[...]

That could be a backward portability issue.

echo }

outputs "}" (it's not the case in zsh except in sh/ksh modes).
--
Stéphane
l***@gmail.com
2006-03-07 14:18:17 UTC
Permalink
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
Thank you for your kind words. I'd say that many of the extensions
provided over the years have been implemented in rather inconsistent
ways. If functions, indirect variable expansion, arrays, local
variables and other new features had been better integrated into the
shell, the language would, in my opinion, have been in a much better
condition today.
I couldn't agree more.
Glad to hear we agree here. So, aside from the 'FOO=BAR echo
$FOO'-question, where we seem to see things differently, what are your
feelings on e.g. the fish array syntax?
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
Post by bsh
This is completely untrue. Although the explanation is beyond
the scope of the discussion (and our patience...), the grouping
keyword ("}") is commensurate with the other terminating
keywords "esac", "done", and "fi", and as such require being on
their own line or after another keyword.
If I understand you correctly, your point is that '}' should be viewed
as a command-like keyword, like 'esac' and 'fi'. And if you do so, it
will indeed seem more intuitive. But that is beside the point for me,
* The syntax is clearly borrowed/inspired by from C-like languages
where } does not behave like a function or reserved word.
[...]
That could be a backward portability issue.
echo }
outputs "}" (it's not the case in zsh except in sh/ksh modes).
As you mentioned in another post, zsh is much more clever in its
parsing of ';' and '}'. It may be that there are some awkward edge
cases where there is a clash between braces used to denote blocks and
braces used for brace expansion, but overall the zsh syntax makes _much_
more sense to me. That said, there is of course still the issue of
using all these keywords for denoting end-of-block:

fi, esac, done, }, ;;

When testing around in zsh, the only thing that I felt should be
allowed which wasn't was the use of multiple ';' without any whitespace
between them. It seems that ';;' is interpreted as the case-end
keyword, and as such will lead to a parse error. That means that one
can legally write an empty command like this:

;

But one can't write two empty commands on a single line like this:

;;

which is of course a very minor detail, all things considered. I
mention this mostly because I think it makes a lot of sense from a
language point of view to minimize the difference between ';' and a
newline. It is legal to have any number of newlines with no whitespace
between them, so from that perspective it makes sense to do the same
with ';'. This is an exact parallel to how spaces, tabs and newlines
are equivalent outside of string literals in C. In fish, ';' and
newlines are syntactically equivalent in non-quoted contexts, so you
can write ';;;;;;;;;' and fish will gladly do nothing at all.
Post by Stephane CHAZELAS
--
Stéphane
--
Axel
Stephane Chazelas
2006-03-07 15:17:19 UTC
Permalink
On 7 Mar 2006 06:18:17 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
As you mentioned in another post, zsh is much more clever in its
parsing of ';' and '}'. It may be that there are some awkward edge
cases where there is a clash between braces used to denote blocks and
braces used for brace expansion, but overall the zsh syntax makes _much_
more sense to me. That said, there is of course still the issue of
fi, esac, done, }, ;;
You can use the shorter forms if you don't like the longer ones.
I use the shorter forms at the prompt:

if [[ a = b ]] {echo yes} else {echo no}

for f (1 2 3) echo $f

while ((i++ < 5)) echo $i

repeat 10 echo $((++i))

case a {(b) echo yes;; (*) echo no;;}
--
Stephane
Markus Gyger
2006-03-08 10:19:06 UTC
Permalink
Post by l***@gmail.com
I mention
this mostly since I think that it makes a lot of sense from a language
point of view to minimize the difference between a ';' and a newline.
Although it doesn't have to be defined this way -- see e.g.
JavaScript[1], where most semicolons can be omitted.

[1] Standard ECMA-262: ECMAScript Language Specification
7.9.1 Rules of Automatic Semicolon Insertion
http://www.ecma-international.org/publications/standards/Ecma-262.htm
[I have to admit that some things in the standard are not
too obvious at first -- like e.g. that property accessors
can be used to mimic associative arrays (hashes).]


Markus
l***@gmail.com
2006-03-08 10:47:20 UTC
Permalink
Post by Markus Gyger
Post by l***@gmail.com
I mention
this mostly since I think that it makes a lot of sense from a language
point of view to minimize the difference between a ';' and a newline.
Although it doesn't have to be defined this way -- see e.g.
JavaScript[1], where most semicolons can be omitted.
I wouldn't use JavaScript as an example of a very good language. Most
sane languages try to make as little difference as possible between
different types of whitespace and different types of command
delimiters.
Post by Markus Gyger
[1] Standard ECMA-262: ECMAScript Language Specification
7.9.1 Rules of Automatic Semicolon Insertion
http://www.ecma-international.org/publications/standards/Ecma-262.htm
[I have to admit that some things in the standard are not
too obvious at first -- like e.g. that property accessors
can be used to mimic associative arrays (hashes).]
Markus
--
Axel
Stephane CHAZELAS
2006-03-07 08:40:50 UTC
Permalink
2006-03-6, 18:55(-08), bsh:
[...]
Post by bsh
Posix shells have no notion of local variables ...
I want to have noted that the modus operandi of POSIX is to
not mandate features, but minimal functionality. The distinction
for this instance is that local variable scopes can be provided
by a superset of the POSIX 1003.2 that does not break required
standards.
But then a POSIX script can't use them as the behavior is
unspecified. There's no need to have a POSIX conformant shell if
it's to write non-POSIX conformant scripts. That doesn't help
with portability.

I find that a shell can't be at the same time a good shell and a
good programming language. There are conflicting purposes and
means between both (the fact that a shell has to be human
oriented, for instance). So, I find it acceptable that it has
limited programming capabilities, but I find it important that
there exists a standard that still allows one to write portable
shell programs, as those can prove useful sometimes.
Post by bsh
Quickly now, which of the following are legal function definitions?
hello () {echo hello }
hello () {echo hello;}
hello () {;echo hello }
hello () {;echo hello;}
hello () { echo hello }
hello () { echo hello;}
hello () { ;echo hello }
hello () { ;echo hello;}
... that is proof of a completely broken syntax.
(Number 4 should also work....)
Well "{" is supposed to be followed by a command there, just as
"then".
Post by bsh
This is completely untrue. Although the explanation is beyond
the scope of the discussion (and our patience...), the grouping
keyword ("}") is commensurate with the other terminating
keywords "esac", "done", and "fi", and as such requires being on
its own line or after another keyword.
Note that it's not the case of zsh, where

hello() {echo hello}

is valid.
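To make the quiz concrete, here is a sketch of the forms a POSIX-style
shell accepts (zsh, as noted, is more liberal):

```shell
#!/bin/sh
# In POSIX-style shells, '}' is a reserved word and is only
# recognized where a new command could start, i.e. after a
# newline or a ';'. So:

hello() { echo hello; }   # valid: ';' ends the command before '}'

world() { echo world
}                         # valid: a newline works too

# hello() { echo hello }  # invalid: '}' is parsed as an argument to echo
# hello() { ;echo hello; }# invalid: ';' may not start a command
# Note that '{' must also be a separate word: '{echo hello;}' tries
# to run a command literally named '{echo'.

hello
world
```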

[...]
Post by bsh
Granted, the syntactic hoops that must be jumped through are
more than found in shells rc/akanga/es, because these are written
in lex/yacc. k/sh is written as a triple-pass recursive-descent
parser, with _separate_ parsing rules for variable-lists and
redirection-operators; however, the syntax is regular enough
that I have written a parser for k/sh(1) that accepts _all_
legal constructs, whatever the contextual quoting rules
(such as whether quote-removal occurs or not within parameter
expansion).
That sounds great, have you made it available?
--
Stéphane
bsh
2006-03-07 22:46:20 UTC
Permalink
Post by Stephane CHAZELAS
[...]
Post by bsh
Posix shells have no notion of local variables ...
I want to have noted that the modus operandi of POSIX is to
not mandate features, but minimal functionality. The distinction
for this instance is that local variable scopes can be provided
by a superset of the POSIX 1003.2 that does not break required
standards.
But then a POSIX script can't use them as the behavior is
unspecified. There's no need to have a POSIX conformant shell if
it's to write non-POSIX conformant scripts. That doesn't help
with portability.
There's a POSIX-specified difference between POSIX _compliant_
and POSIX _conformant_ -- are you aware of this? Compliance
indicates a strict accommodation to the standard; conformance
presumably allows for a superset of functionality.
Post by Stephane CHAZELAS
Note that it's not the case of zsh, where
hello() {echo hello}
zsh(1) is kind of an oddball, with the impression that the author
was removing the goofier elements of k/sh syntax, without realising
the historical context for why they existed in the first place. For
at least the casual zsh(1) scripter, this has not served to hurt
the language at all -- just the opposite. But with a background of
language design, and having laboriously delved into the bowels
of the k/sh(1) syntax, I can say that the ultimate salvation of
any language is consistency, even if in the case of ksh(1)
consistency has brought about some decisions which seem
odd (like the issue of "{" and "}" above) but MUST be so to
maintain the subtler semantics of the language and still
remain a superset (that is, ksh(1) runs 95% of sh(1) scripts).
(Of course for a new language, such decisions do not apply;
but then I am wondering, what does the language have to offer
more or better than existing scripting languages?)

Because I have studied the grammar so extensively, it neither
confuses nor dismays me; _however_, the hardest
thing to understand well is the fact that parsing and quoting
rules vary by the shell _context_. For instance:

$ print ${var:=this & that}  # param expansion is intrinsically quoted
$ (( var+=1 ))               # arithmetic "context" within "let" statement (no "$var")
$ shift var                  # shift parses argument as arithmetic expression

There is a table in B&K for the selection and order(!) of the
various parsing contexts for the many kinds of constructs, and
there are _many_ footnotes....

sh(1) only had one parsing context that you would have needed
to read the documentation to know about: the argument to
"case" NEVER needs to be quoted. It is intrinsically quoted:

var='text with spaces'
case $var in ... # a convenience, but still needing memorization
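Fleshing out that example as a runnable sketch (POSIX sh):

```shell
#!/bin/sh
var='text with spaces'

# The word after 'case' is not field-split or glob-expanded,
# so $var needs no quotes in this one position:
case $var in
    'text with spaces') matched=yes ;;
    *)                  matched=no ;;
esac
echo "$matched"    # yes

# In an ordinary command position the same unquoted expansion
# IS split on IFS:
set -- $var
echo "$#"          # 3 fields
```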
Post by Stephane CHAZELAS
Post by bsh
however, the syntax is regular enough
that I have written a parser for k/sh(1) that accepts _all_
legal constructs, whatever the contextual quoting rules
(such as whether quote-removal occurs or not within parameter
expansion).
That sounds great, have you made it available?
I accidentally omitted the fact that the scanner/parser is
written in (old) awk(1) and (old) sed(1), which was to indicate
that any language that is possible to parse in such languages
must be at least fairly regular!

I have made it selectively available to beta testers over the
years, but as Zdenek Sekera will attest, my code base is
currently unavailable.... :(

It is rather a small subsystem of a much more ambitious
IDE and function library for k/sh, which I will call SIDE and
publish eventually as open-source code.

=Brian
Chris F.A. Johnson
2006-03-07 23:03:04 UTC
Permalink
On 2006-03-07, bsh wrote:
...
Post by bsh
Because I have studied the grammar so extensively, it neither
confuses nor dismays me; _however_, the hardest
thing to understand well is the fact that parsing and quoting
$ print ${var:=this & that} # param expansion is intrinsically quoted
It is? Try this:

printf "%s\n" ${var:=this & that}
Post by bsh
$ (( var+=1 ))  # arithmetic "context" within "let" statement (no "$var")
That is not POSIX.
Post by bsh
$ shift var  # shift parses argument as arithmetic expression
Is that a ksh peculiarity?

$ shift 1+2
bash: shift: 1+2: numeric argument required
--
Chris F.A. Johnson, author | <http://cfaj.freeshell.org>
Shell Scripting Recipes: | My code in this post, if any,
A Problem-Solution Approach | is released under the
2005, Apress | GNU General Public Licence
Stephane CHAZELAS
2006-03-08 08:09:10 UTC
Permalink
Post by Chris F.A. Johnson
...
Post by bsh
Because I have studied the grammar so extensively, it neither
confuses nor dismays me; _however_, the hardest
thing to understand well is the fact that parsing and quoting
$ print ${var:=this & that} # param expansion is intrinsically quoted
printf "%s\n" ${var:=this & that}
That's word splitting involved here.

Try:

IFS=; printf "%s\n" ${var:=this & that}
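The splitting effect is easy to reproduce as a runnable sketch (plain
words are used instead of '&', since the behavior of an unquoted '&'
inside the expansion is exactly the contested point):

```shell
#!/bin/sh
unset var
# Unquoted, the result of ${var:=...} undergoes field splitting
# on the default IFS, producing three words:
printf '%s\n' ${var:=this and that}    # three lines

unset var
# With IFS empty, no splitting occurs and one word is produced:
IFS=
printf '%s\n' ${var:=this and that}    # one line
```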
Post by Chris F.A. Johnson
Post by bsh
$ (( var+=1 ))  # arithmetic "context" within "let" statement (no "$var")
That is not POSIX.
Post by bsh
$ shift var  # shift parses argument as arithmetic expression
Is that a ksh peculiarity?
Yes, I think Brian was speaking of ksh in particular there.
Post by Chris F.A. Johnson
$ shift 1+2
bash: shift: 1+2: numeric argument required
--
Stéphane
l***@gmail.com
2006-03-07 23:23:13 UTC
Permalink
Post by bsh
Post by Stephane CHAZELAS
[...]
Post by bsh
Posix shells have no notion of local variables ...
I want to have noted that the modus operandi of POSIX is to
not mandate features, but minimal functionality. The distinction
for this instance is that local variable scopes can be provided
by a superset of the POSIX 1003.2 that does not break required
standards.
But then a POSIX script can't use them as the behavior is
unspecified. There's no need to have a POSIX conformant shell if
it's to write non-POSIX conformant scripts. That doesn't help
with portability.
There's a POSIX-specified difference between POSIX _compliant_
and POSIX _conformant_ -- are you aware of this? Compliance
indicates a strict accommodation to the standard; conformance
presumably allows for a superset of functionality.
Post by Stephane CHAZELAS
Note that it's not the case of zsh, where
hello() {echo hello}
zsh(1) is kind of an oddball, with the impression that the author
was removing the goofier elements of k/sh syntax, without realising
the historical context for why they existed in the first place. For
at least the casual zsh(1) scripter, this has not served to hurt
the language at all -- just the opposite. But with a background of
language design, and having laboriously delved into the bowels
of the k/sh(1) syntax, I can say that the ultimate salvation of
any language is consistency, even if in the case of ksh(1)
consistency has brought about some decisions which seem
odd (like the issue of "{" and "}" above) but MUST be so to
maintain the subtler semantics of the language and still
remain a superset (that is, ksh(1) runs 95% of sh(1) scripts).
(Of course for a new language, such decisions do not apply;
but then I am wondering, what does the language have to offer
more or better than existing scripting languages?)
In the case of fish, these are a few of the things offered:

* Support for more useful variable scoping rules as described in my
original post.
* Universal variables as described in my original post.
* A generic event syntax as described in my original post.
* Better error reporting as described in my original post
* Drop the cruft you describe. No more strange annoying quirks.
* Syntax highlighting when in interactive mode
* Advanced tab completions, including useful descriptions. For example,
type 'man wcs<TAB>' and you'll be presented with a listing of all
manual pages beginning with 'wcs' and a short summary of what each
page contains.
* X clipboard integration. ^K moves the rest of the line to the X
clipboard, ^Y pastes from the clipboard, etc. (If X is not running, an
internal fallback is used)
* Simple integrated history search - type in a search string into the
prompt and press the up arrow to search for the specified string in the
history. Use Meta-up to search for a token and not a whole line.

There are many, many other changes. A large portion of all fish
features is user interface polish. I meant for this discussion to be
about shell syntax, but I'd be happy to discuss user interface issues
as well.

[...]
Post by bsh
=Brian
--
Axel
Jordan Abel
2006-03-08 00:37:40 UTC
Permalink
Post by l***@gmail.com
Post by bsh
Post by Stephane CHAZELAS
[...]
Post by bsh
Posix shells have no notion of local variables ...
I want to have noted that the modus operandi of POSIX is to
not mandate features, but minimal functionality. The distinction
for this instance is that local variable scopes can be provided
by a superset of the POSIX 1003.2 that does not break required
standards.
But then a POSIX script can't use them as the behavior is
unspecified. There's no need to have a POSIX conformant shell if
it's to write non-POSIX conformant scripts. That doesn't help
with portability.
There's a POSIX-specified difference between POSIX _compliant_
and POSIX _conformant_ -- are you aware of this? Compliance
indicates a strict accommodation to the standard; conformance
presumably allows for a superset of functionality.
Post by Stephane CHAZELAS
Note that it's not the case of zsh, where
hello() {echo hello}
zsh(1) is kind of an oddball, with the impression that the author
was removing the goofier elements of k/sh syntax, without realising
the historical context for why they existed in the first place. For
at least the casual zsh(1) scripter, this has not served to hurt
the language at all -- just the opposite. But with a background of
language design, and having laboriously delved into the bowels
of the k/sh(1) syntax, I can say that the ultimate salvation of
any language is consistency, even if in the case of ksh(1)
consistency has brought about some decisions which seem
odd (like the issue of "{" and "}" above) but MUST be so to
maintain the subtler semantics of the language and still
remain a superset (that is, ksh(1) runs 95% of sh(1) scripts).
(Of course for a new language, such decisions do not apply;
but then I am wondering, what does the language have to offer
more or better than existing scripting languages?)
* Support for more useful variable scoping rules as described in my
original post.
* Universal variables as described in my original post.
zsh has dynamic scope. lexical scope can be overly restrictive.
Post by l***@gmail.com
* A generic event syntax as described in my original post.
I didn't read the original post all the way through. what are "events",
though?
Post by l***@gmail.com
* Better error reporting as described in my original post
* Drop the cruft you describe. No more strange annoying quirks.
It's only strange and annoying when you don't know the syntax
Post by l***@gmail.com
* Syntax highlighting when in interactive mode
Nice.
Post by l***@gmail.com
* Advanced tab completions, including useful descriptions. For example,
type 'man wcs<TAB>' and you'll be presented with a listing of all
manual pages beginning with 'wcs' and a short summary of what each
page contains.
Zsh does this.
Post by l***@gmail.com
* X clipboard integration. ^K moves the rest of the line to the X
clipboard, ^Y pastes from the clipboard, etc. (If X is not running, an
internal fallback is used)
You haven't said whether you do this - are multiple cuts without cursor
movement treated as one? [most other applications do this, i.e.]

word1 word2_word3 word4

^K^W will put "word2 word3 word4" in the cut buffer. (The cursor
location is marked by the underscore.)

What would be nice would be integration with the _screen_ clipboard -
send \e]83;paste .\a to paste, and \e]83;readbuf [tmpfile]\a to copy.
you could use the screen-exchange file, too.
Post by l***@gmail.com
* Simple integrated history search - type in a search string into the
prompt and press the up arrow to search for the specified string in the
history. Use Meta-up to search for a token and not a whole line.
that sounds like vim's way - anything wrong with the i-search feature in
other shells?
Post by l***@gmail.com
There are many, many other changes. A large portion of all fish
features is user interface polish. I meant for this discussion to be
about shell syntax, but I'd be happy to discuss user interface issues
as well.
l***@gmail.com
2006-03-08 10:37:47 UTC
Permalink
Post by Jordan Abel
Post by l***@gmail.com
Post by bsh
Post by Stephane CHAZELAS
[...]
Post by bsh
Posix shells have no notion of local variables ...
I want to have noted that the modus operandi of POSIX is to
not mandate features, but minimal functionality. The distinction
for this instance is that local variable scopes can be provided
by a superset of the POSIX 1003.2 that does not break required
standards.
But then a POSIX script can't use them as the behavior is
unspecified. There's no need to have a POSIX conformant shell if
it's to write non-POSIX conformant scripts. That doesn't help
with portability.
There's a POSIX-specified difference between POSIX _compliant_
and POSIX _conformant_ -- are you aware of this? Compliance
indicates a strict accommodation to the standard; conformance
presumably allows for a superset of functionality.
Post by Stephane CHAZELAS
Note that it's not the case of zsh, where
hello() {echo hello}
zsh(1) is kind of an oddball, with the impression that the author
was removing the goofier elements of k/sh syntax, without realising
the historical context for why they existed in the first place. For
at least the casual zsh(1) scripter, this has not served to hurt
the language at all -- just the opposite. But with a background of
language design, and having laboriously delved into the bowels
of the k/sh(1) syntax, I can say that the ultimate salvation of
any language is consistency, even if in the case of ksh(1)
consistency has brought about some decisions which seem
odd (like the issue of "{" and "}" above) but MUST be so to
maintain the subtler semantics of the language and still
remain a superset (that is, ksh(1) runs 95% of sh(1) scripts).
(Of course for a new language, such decisions do not apply;
but then I am wondering, what does the language have to offer
more or better than existing scripting languages?)
* Support for more useful variable scoping rules as described in my
original post.
* Universal variables as described in my original post.
zsh has dynamic scope. lexical scope can be overly restrictive.
But if you simply create a new variable, it will be global by default,
which is the part that has me worried.
Post by Jordan Abel
Post by l***@gmail.com
* A generic event syntax as described in my original post.
I didn't read the original post all the way through. what are "events",
though?
How about reading that bit of my original post to find out?
Post by Jordan Abel
Post by l***@gmail.com
* Better error reporting as described in my original post
* Drop the cruft you describe. No more strange annoying quirks.
It's only strange and annoying when you don't know the syntax
Look at the subconversation in this thread about 'printf "%s\n"
${var:=this & that}'. I would not say that bsh, Chris F.A. Johnson and
Stephane Chazelas don't know the syntax, would you? But the fact is,
there are so many subtle ambiguities that even they occasionally get
the details wrong. (Or at least one of them, since they don't agree)
Post by Jordan Abel
Post by l***@gmail.com
* Syntax highlighting when in interactive mode
Nice.
Post by l***@gmail.com
* Advanced tab completions, including useful descriptions. For example,
type 'man wcs<TAB>' and you'll be presented with a listing of all
manual pages beginning with 'wcs' and a short summary of what each
page contains.
Zsh does this.
At least zsh 4.2.1 doesn't. If you complete 'ls -' it will show you
switches to ls with descriptions, just like fish. But at least my
version of zsh (using compinit) doesn't do any of the following:

* Provide the whatis entry as the description for manual page
completions
* Provide the whatis entry as the description for command name
completions
* Provide the full name as the description when completing a username
* Correctly complete strings containing braces, e.g. 'rm
{.,backup}/INST<tab>' will not work
* Correctly complete switches when multiple short switches are grouped
together, like 'tar -zx<TAB>'
* Correctly support gnu-style switches that accept arguments but don't
require them, like ls --color. It always completes with '--color=',
even though '--color' would also be a correct completion
Post by Jordan Abel
Post by l***@gmail.com
* X clipboard integration. ^K moves the rest of the line to the X
clipboard, ^Y pastes from the clipboard, etc. (If X is not running, an
internal fallback is used)
You haven't said whether you do this - are multiple cuts without cursor
movement treated as one? [most other applications do this, i.e.]
word1 word2_word3 word4
^K^W will put "word2 word3 word4" in the cut buffer. (The cursor
location is marked by the underscore.)
Not yet. Hadn't thought of that one. Will add it, though.
Post by Jordan Abel
What would be nice would be integration with the _screen_ clipboard -
send \e]83;paste .\a to paste, and \e]83;readbuf [tmpfile]\a to copy.
you could use the screen-exchange file, too.
That sounds very useful. Is there some place I can read up on how the
screen clipboard works?
Post by Jordan Abel
Post by l***@gmail.com
* Simple integrated history search - type in a search string into the
prompt and press the up arrow to search for the specified string in the
history. Use Meta-up to search for a token and not a whole line.
that sounds like vim's way - anything wrong with the i-search feature in
other shells?
Yes.

* If you've typed something and realise that you may have already
written that before, you can search on what you've already typed
* Fish combines the history moving with the history search, so there
are fewer things to learn
* Fish highlights the search match, making it easier to tell if you've
found the right command
* Fish allows you to search for a specific token instead of a whole
command
--
Axel
Stephane Chazelas
2006-03-08 11:10:43 UTC
Permalink
On 8 Mar 2006 02:37:47 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Jordan Abel
zsh has dynamic scope. lexical scope can be overly restrictive.
But if you simply create a new variable, it will be global by default,
which is the part that has me worried.
Same in awk, perl and many languages. It looks more intuitive
(familiar) to me to have to tell it when I want to limit the
scope.

[...]
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
* Advanced tab completions, including useful descriptions. For example,
type 'man wcs<TAB>' and you'll be presented with a listing of all
manual pages beginning with 'wcs' and a short summary of what each
page contains.
Zsh does this.
At least zsh 4.2.1 doesn't. If you complete 'ls -' it will show you
switches to ls with descriptions, just like fish. But at least my
* Provide the whatis entry as the description for manual page
completions
* Provide the whatis entry as the description for command name
completions
These are nice features. Note that zsh has all the necessary
framework to enable you to do the same without having to modify
zsh's code. I would bet zsh supports many more such features
than fish, given that support for new commands has been added to it
for about 10 years, though.

One of the very few criticisms of zsh I've heard is that when
you use all of its features, it can get very bloated and
resource consuming. If you hold a cache of all the whatis or
user database in memory, you'll run into similar problems.
Post by l***@gmail.com
* Provide the full name as the description when completing a username
* Correctly complete strings containing braces, e.g. 'rm
{.,backup}/INST<tab>' will not work
You can't expect that to work with zsh, the way it is. zsh
completes one thing at a time, {..,..} is not some sort of
globbing, it is expanded very early (before any other things),
it doesn't make much sense to complete there as you can't do
anything reliable.
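The ordering Stephane describes is visible outside completion too; a
sketch (using explicit alternatives, since plain sh has no brace
expansion):

```shell
#!/bin/sh
# In shells with brace expansion (bash, zsh, ksh),
#   rm {.,backup}/INST*
# is rewritten, before globbing, into two separate words:
#   rm ./INST* backup/INST*
# Each word then matches the filesystem independently.
dir=$(mktemp -d)
mkdir -p "$dir/backup"
touch "$dir/INSTALL" "$dir/backup/INSTALL"
cd "$dir"
set -- ./INST* backup/INST*    # the post-brace-expansion word list
printf '%s\n' "$@"
```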
Post by l***@gmail.com
* Correctly complete switches when multiple short switches are grouped
together, like 'tar -zx<TAB>'
What do you mean? That's not the correct tar syntax BTW.
Post by l***@gmail.com
* Correctly support gnu-style switches that accept arguments but don't
require them, like ls --color. It always completes with '--color=',
even though '--color' would also be a correct completion
zsh supports them

If you type:

ls --co<tab><space>

or

ls --co<tab><enter>

zsh will remove the "="

[...]
Post by l***@gmail.com
Post by Jordan Abel
What would be nice would be integration with the _screen_ clipboard -
send \e]83;paste .\a to paste, and \e]83;readbuf [tmpfile]\a to copy.
you could use the screen-exchange file, too.
That sounds very useful. Is there some place I can read up on how the
screen clipboard works?
info screen

All that is easily done with zsh (see
http://stchaz.free.fr/mouse.zsh for the X clipboard support)
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
* Simple integrated history search - type in a search string into the
prompt and press the up arrow to search for the specified string in the
history. Use Meta-up to search for a token and not a whole line.
see also the "predict" mode in zsh.
Post by l***@gmail.com
Post by Jordan Abel
that sounds like vim's way - anything wrong with the i-search feature in
other shells?
Yes.
* If you've typed something and realise that you may have already
written that before, you can search on what you've already typed
<Esc-p> in zsh?
Post by l***@gmail.com
* Fish combines the history moving with the history search, so there
are fewer things to learn
* Fish highlights the search match, making it easier to tell if you've
found the right command
That's neat.
--
Stephane
l***@gmail.com
2006-03-08 12:17:47 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
zsh has dynamic scope. lexical scope can be overly restrictive.
But if you simply create a new variable, it will be global by default,
which is the part that has me worried.
Same in awk, perl and many languages. It looks more intuitive
(familiar) to me to have to tell it when I want to limit the
scope.
I disagree. Most variables that you use inside functions should be
function local in my experience. And most of those that shouldn't,
should be exported as well, anyway.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
* Advanced tab completions, including useful descriptions. For example,
type 'man wcs<TAB>' and you'll be presented with a listing of all
manual pages beginning with 'wcs' and a short summary of what each
page contains.
Zsh does this.
At least zsh 4.2.1 doesn't. If you complete 'ls -' it will show you
switches to ls with descriptions, just like fish. But at least my
* Provide the whatis entry as the description for manual page
completions
* Provide the whatis entry as the description for command name
completions
These are nice features. Note that zsh has all the necessary
framework to enable you to do the same without having to modify
zsh's code. I would bet zsh supports many more such features
than fish, given that support for new commands has been added to it
for about 10 years, though.
Probably. Fish has command specific completions for ~150 commands. I
tried out a few commands and noticed that zsh lacks completions for
darcs, su, yum and most subcommands of apt, while fish lacks
completions for telnet, rcp and finger. It would seem to me that zsh
has completions for many old-school commands that fish lacks, while
fish has completion support for a few newer commands that zsh lacks. It
is possible that there is a new zsh release featuring support for many
more commands, I'm using 4.2.1.
Post by Stephane Chazelas
One of the very few criticisms of zsh I've heard is that when
you use all of its features, it can get very bloated and
resource consuming. If you hold a cache of all the whatis or
user database in memory, you'll run into similar problems.
Fish autoloads everything on first use. The first time you use a
shellscript function, that function is loaded. The first time you
complete a command, the completions for that command are loaded. The
first time you access the commandline history, the history file is
loaded. These incremental loads are so small that they aren't noticeable
on my 300MHz speed demon unless the disk is spun down. There is an
issue with memory use, however, since nothing ever gets unloaded. This
issue is amplified by the fact that fish internally uses wide character
strings, so most text is quadrupled in size. I just used massif to
tell me the fish memory usage, and it claims fish uses ~100 kB on
startup and ~1.6 MB if you manually load _everything_. I think massif
is doing something wrong however; the memory usage profile looks funny.
My guess would be that fish uses ~150 kB on startup. I find that my
fish sessions usually pan out at 300-400 kB of memory, which seems to
be what zsh uses on startup.

I plan to implement unloading of functions and completions that haven't
been used in a long time to make this problem go away.

Fish is also a bit slower than other shells on some types of scripts,
since it doesn't implement a huge number of commands as builtins. E.g.
time, pwd, kill, printf and echo use the standard unix commands, not
builtins.
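The cost of that choice is mainly a fork+exec per invocation; a rough
sketch (env(1) is used to force the external binary, since a bare
'echo' resolves to the builtin):

```shell
#!/bin/sh
# Builtin echo: no new process per call.
i=0
while [ "$i" -lt 100 ]; do
    echo hi >/dev/null
    i=$((i+1))
done

# Routing through env(1) forces a PATH lookup and an external
# process per call, which is what dominates tight script loops:
j=0
while [ "$j" -lt 100 ]; do
    env echo hi >/dev/null
    j=$((j+1))
done
```

Timing the two loops with time(1) makes the difference obvious on any
system.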
Post by Stephane Chazelas
Post by l***@gmail.com
* Provide the full name as the description when completing a username
* Correctly complete strings containing braces, e.g. 'rm
{.,backup}/INST<tab>' will not work
You can't expect that to work with zsh, the way it is. zsh
completes one thing at a time, {..,..} is not some sort of
globbing, it is expanded very early (before any other things),
it doesn't make much sense to complete there as you can't do
anything reliable.
I can expect it and I do. Zsh supports every known feature on the
planet, so why not this? ;-)

The fish globbing code can be run in a 'completion-mode', where instead
of expanding things on wildcards, etc. all possible completions are
added. This way, one can in fish complete strings like 'grep foo
/proc/*/cmd<TAB>' without replacing the * with all found matches, like
in zsh.
Post by Stephane Chazelas
Post by l***@gmail.com
* Correctly complete switches when multiple short switches are grouped
together, like 'tar -zx<TAB>'
What do you mean? That's not the correct tar syntax BTW.
tar -zx

is synonymous with

tar -z -x

One can group single-character switches on a single hyphen. Fish
supports this in its completion code.
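The grouping rule being described is the same one getopts implements; a
minimal sketch (hypothetical -z/-x flags standing in for tar's):

```shell
#!/bin/sh
# POSIX getopts splits grouped short options: -zx is seen as
# -z followed by -x.
parse() {
    z=0 x=0
    OPTIND=1    # reset parser state between calls
    while getopts zx opt "$@"; do
        case $opt in
            z) z=1 ;;
            x) x=1 ;;
        esac
    done
    echo "z=$z x=$x"
}

parse -zx      # prints "z=1 x=1"
parse -z -x    # same result
```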
Post by Stephane Chazelas
Post by l***@gmail.com
* Correctly support gnu-style switches that accept arguments but don't
require them, like ls --color. It always completes with '--color=',
even though '--color' would also be a correct completion
zsh supports them
ls --co<tab><space>
or
ls --co<tab><enter>
zsh will remove the "="
Ok. Nice.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
What would be nice would be integration with the _screen_ clipboard -
send \e]83;paste .\a to paste, and \e]83;readbuf [tmpfile]\a to copy.
you could use the screen-exchange file, too.
That sounds very useful. Is there some place I can read up on how the
screen clipboard works?
info screen
I was kind of hoping there'd be some form of programmable interface for
this. I guess I'll have some work before me, then...
Post by Stephane Chazelas
All that is easily done with zsh (see
http://stchaz.free.fr/mouse.zsh for the X clipboard support)
Yes, I used your very nice X code as the base for my implementation.
This is mentioned in the changelogs. I only wish that your code would
be included in zsh and enabled by default. The huge amount of tweaking
one has to do to get zsh into a usable state is my main gripe with that
shell.
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
* Simple integrated history search - type in a search string into the
prompt and press the up arrow to search for the specified string in the
history. Use Meta-up to search for a token and not a whole line.
see also the "predict" mode in zsh.
The difference here is the common interface between the two. Both are
history searches, so it makes sense to me that the interface should be
the same.
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Jordan Abel
that sounds like vim's way - anything wrong with the i-search feature in
other shells?
Yes.
* If you've typed something and realise that you may have already
written that before, you can search on what you've already typed
<Esc-p> in zsh?
Doesn't work on my machine. Let me guess: I need to manually load
something first? :-(
Post by Stephane Chazelas
Post by l***@gmail.com
* Fish combines the history moving with the history search, so there
are fewer things to learn
* Fish highlights the search match, making it easier to tell if you've
found the right command
That's neat.
Thanks.
Post by Stephane Chazelas
--
Stephane
Stephane Chazelas
2006-03-08 13:36:07 UTC
Permalink
On 8 Mar 2006 04:17:47 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Probably. Fish has command specific completions for ~150 commands. I
tried out a few commands and noticed that zsh lacks completions for
darcs, su, yum and most subcommands of apt, while fish lacks
completions for telnet, rcp and finger. It would seem to me that zsh
has completions for many old-school commands that fish lacks, while
fish has completion support for a few newer commands the zsh lacks. It
is possible that there is a new zsh release featuring support for many
more commands, I'm using 4.2.1.
$ print ${(k)#_comps:#-*-*}
729

(plus all the variants on different Unix systems, and the
completions of other things than command arguments)

in 4.2.4; I doubt it would be much lower in 4.2.1.
Post by l***@gmail.com
Post by Stephane Chazelas
One of the very few critics of zsh I've heard of is that when
you use all of its features, it can get very bloated and
resource consuming. If you hold a cache of all the whatis or
user database in memory, you'll run into similar problems.
Fish autoloads everything on first use. The first time you use a
shellscript function, that function is loaded.
same for zsh.
Post by l***@gmail.com
The first time you
complete a command, the completions for that command are loaded. The
first time you access the commandline history, the history file is
loaded. These incremental loads are so small that they aren't noticeable
on my 300MHz speed demon unless the disk is spun down. There is an
issue with memory use, however, since nothing ever gets unloaded. This
issue is amplified by the fact that fish internally uses wide character
strings, so most text is quadrupled in size. I just used massigf to
tell me the fish memory usage, and it claims fish uses ~100 kB on
startup and ~1.6 MB if you manually load _everything_. I think massif
is doing something wrng however, the memory usage profile looks funny.
My guess would be that fish uses ~150 kB on startup. I find that my
fish sessions usually pan out at 300-400 kB of memory, which seems to
be what zsh uses on startup.
The whatis database on this system is:

$ cat ${(u)^manpath}/windex(.N) | wc -c
937419

characters long

on this other one:

$ wc -c < /var/cache/man/whatis
1711935

Would that mean that fish will use 4 or 7 MB of memory only for
holding the man completion?

Is there a way to disable that feature?

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
You can't expect that to work with zsh, the way it is. zsh
completes one thing at a time, {..,..} is not some sort of
globbing, it is expanded very early (before any other things),
it doesn't make much sense to complete there as you can't do
anything reliable.
I can expect it and I do. Zsh supports every known feature on the
planet, so why not this? ;-)
What should it do in

cmd {*/|/usr}<Tab>

?
Post by l***@gmail.com
The fish globbing code can be run in a 'completion-mode', where instead
of expanding things on wildcards, etc. all possible completions are
added. This way, one can in fish complete strings like 'grep foo
/proc/*/cmd<TAB>' without replacing the * with all found matches, like
in zsh.
That's tunable in zsh (see compinstall). Also note the
/u/l/b<Tab> that gets expanded to /usr/local/bin, and the
approximate and case-insensitive completions (with corrections).
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
* Correctly complete switches when multiple short switches are grouped
together, like 'tar -zx<TAB>'
What do you mean? That's not the correct tar syntax BTW.
tar -zx
is synonymous with
tar -z -x
One can group single-character switches on a single hyphen. Fish
supports this in its completion code.
as does zsh.

The correct syntax is tar zx

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
info screen
I was kind of hoping there'd be some form of programmable interface for
this. I guess I'll have some work ahead of me, then...
You can interact with screen with the \e[83... escape sequence
or the screen command.
Post by l***@gmail.com
Post by Stephane Chazelas
All that is easily done with zsh (see
http://stchaz.free.fr/mouse.zsh for the X clipboard support)
Yes, I used your very nice X code as the base for my implementation.
This is mentioned in the changelogs. I only wish that your code would
be included in zsh and enabled by default. The huge amount of tweaking
one has to do to get zsh into a usable state is my main gripe with that
shell.
What you find usable will not necessarily be usable for someone else.
You tune it to your taste. Probably a theme scheme could be nice
(as for the prompt themes).

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
<Esc-p> in zsh?
Doesn't work on my machine. Let me guess: I need to manually load
something first? :-(
No, you need to be in emacs mode.

It finds the last command line with the same first word as the
one on the current line.

cvs<Alt-P>

will bring the last cvs command you ran.
--
Stephane
l***@gmail.com
2006-03-08 15:34:15 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Probably. Fish has command-specific completions for ~150 commands. I
tried out a few commands and noticed that zsh lacks completions for
darcs, su, yum and most subcommands of apt, while fish lacks
completions for telnet, rcp and finger. It would seem to me that zsh
has completions for many old-school commands that fish lacks, while
fish has completion support for a few newer commands that zsh lacks. It
is possible that there is a newer zsh release featuring support for
many more commands; I'm using 4.2.1.
$ print ${(k)#_comps:#-*-*}
729
(plus all the variants on different Unix systems, and the
completions of other things than command arguments)
701 on my system. The fish number does not count completions for e.g.
all the /etc/init.d scripts, builtins with very few completions, or
non-command completions.
Post by Stephane Chazelas
in 4.2.4; I doubt it would be that much lower in 4.2.1
Post by l***@gmail.com
Post by Stephane Chazelas
One of the very few criticisms of zsh I've heard is that when
you use all of its features, it can get very bloated and
resource-consuming. If you hold a cache of the whole whatis or
user database in memory, you'll run into similar problems.
Fish autoloads everything on first use. The first time you use a
shellscript function, that function is loaded.
same for zsh.
Why then does zsh use over 300 kB on startup with a very short history
file and no other configuration than compinit? This is the amount of
allocated memory as reported by massif, by the way; it only includes
stack and heap allocations.
Post by Stephane Chazelas
Post by l***@gmail.com
The first time you
complete a command, the completions for that command are loaded. The
first time you access the commandline history, the history file is
loaded. These incremental loads are so small that they aren't noticeable
on my 300 MHz speed demon unless the disk is spun down. There is an
issue with memory use, however, since nothing ever gets unloaded. This
issue is amplified by the fact that fish internally uses wide-character
strings, so most text is quadrupled in size. I just used massif to
tell me the fish memory usage, and it claims fish uses ~100 kB on
startup and ~1.6 MB if you manually load _everything_. I think massif
is doing something wrong, however; the memory usage profile looks funny.
My guess would be that fish uses ~150 kB on startup. I find that my
fish sessions usually pan out at 300-400 kB of memory, which seems to
be what zsh uses on startup.
$ cat ${(u)^manpath}/windex(.N) | wc -c
937419
characters long
$ wc -c < /var/cache/man/whatis
1711935
Would that mean that fish will use 4 or 7 MB of memory only for
holding the man completion?
Is there a way to disable that feature?
No need. Fish does not store the database in memory; it uses apropos
and grep to find the correct descriptions. Actually, reading the
database into memory wouldn't work very well; for example, Debian and
Fedora seem to use different whatis formats.

If you simply hate the idea of having such help, even though it doesn't
cost you any memory, you can overload the relevant functions and
completions as I described in my original post. Any completions found
in ~/.fish.d/completions/ or /etc/fish.d/completions/ override the
default fish completions, located in /usr/local/share/fish/completions.
Same thing with functions. You can of course change these search paths,
fish simply uses the arrays $fish_complete_path and
$fish_function_path.
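Since the thread keeps coming back to how fish avoids caching the whatis database, here is a minimal POSIX-shell sketch of the on-demand apropos/grep lookup described above. The sample database lines and the 'describe' helper are invented for illustration; fish's real implementation differs.

```shell
# Sketch: look a command description up on demand with grep instead of
# caching the whatis database in memory. The sample database below and
# the 'describe' helper are made up for this example.
whatis_db=${TMPDIR:-/tmp}/whatis-demo.$$
cat > "$whatis_db" <<'EOF'
ls (1)               - list directory contents
grep (1)             - print lines matching a pattern
EOF

describe() {
    # Print the one-line description for command $1, if any:
    # match the line starting with the name, strip everything up to "- ".
    grep "^$1 " "$whatis_db" | sed 's/^[^-]*- //'
}

describe ls      # prints: list directory contents
```

A real implementation would query the system apropos output instead of a canned file (and remove the temporary file afterwards); the point is that only one grep runs per lookup, so nothing stays resident in memory.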
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
You can't expect that to work with zsh, the way it is. zsh
completes one thing at a time, {..,..} is not some sort of
globbing, it is expanded very early (before any other things),
it doesn't make much sense to complete there as you can't do
anything reliable.
I can expect it and I do. Zsh supports every known feature on the
planet, so why not this? ;-)
What should it do in
cmd {*/|/usr}<Tab>
I guess you meant ',', not '|', since the latter is a syntax error. I'm
also guessing that you meant '/usr/', since '/usr' has only one
completion, namely adding a '/'. Here is the output of completing
'echo {*/,/usr/}' when used from a directory containing one
subdirectory called 'foo' with two files 'bar' and 'baz' in it:

...,/usr/}bar.txt (Plain text document, empty)
...,/usr/}baz.jpg (JPEG image, empty)
...,/usr/}bin/ (Directory)
...,/usr/}etc/ (Directory)
...,/usr/}games/ (Directory)
...,/usr/}include/ (Directory)
...,/usr/}java/ (Directory)
...,/usr/}kerberos/ (Directory)
...,/usr/}lib/ (Directory)
...,/usr/}libexec/ (Directory)
...,/usr/}local/ (Directory)
...,/usr/}man/ (Directory)
...,/usr/}NX/ (Directory)
...,/usr/}sbin/ (Directory)
...,/usr/}share/ (Directory)
...,/usr/}src/ (Directory)
...,/usr/}tmp/ (Symbolic link)
...,/usr/}X11R6/ (Directory)

The '...' is the Unicode ellipsis character, so only one column is used
to show that the prefix is abbreviated. In non-Unicode locales the '$'
symbol is used instead.
Post by Stephane Chazelas
?
Post by l***@gmail.com
The fish globbing code can be run in a 'completion-mode', where instead
of expanding things on wildcards, etc. all possible completions are
added. This way, one can in fish complete strings like 'grep foo
/proc/*/cmd<TAB>' without replacing the * with all found matches, like
in zsh.
That's tunable in zsh (see compinstall). Also note the
/u/l/b<Tab> that gets expanded to /usr/local/bin, and the
approximate and case-insensitive completions (with corrections).
Cool features. I'll steal them. :-D
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
* Correctly complete switches when multiple short switches are grouped
together, like 'tar -zx<TAB>'
What do you mean? That's not the correct tar syntax BTW.
tar -zx
is synonymous with
tar -z -x
One can group single-character switches on a single hyphen. Fish
supports this in its completion code.
as does zsh.
Not according to the GNU manual: both syntaxes are OK. Solaris also
supports both.

But I can see that you are right that zsh supports short-switch
grouping; it simply didn't support hyphenated switches to tar. Fair
enough, fish doesn't support switches without a hyphen in tar.
Post by Stephane Chazelas
The correct syntax is tar zx
Like I said, both are supported these days, though I'm sure 'tar xf' is
the historically correct one if you say so.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
info screen
I was kind of hoping there'd be some form of programmable interface for
this. I guess I'll have some work ahead of me, then...
You can interact with screen with the \e[83... escape sequence
or the screen command.
Ok. I'll send any questions I encounter to this list.
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Stephane Chazelas
All that is easily done with zsh (see
http://stchaz.free.fr/mouse.zsh for the X clipboard support)
Yes, I used your very nice X code as the base for my implementation.
This is mentioned in the changelogs. I only wish that your code would
be included in zsh and enabled by default. The huge amount of tweaking
one has to do to get zsh into a usable state is my main gripe with that
shell.
What you find usable will not necessarily be usable for someone else.
You tune it to your taste. Probably a theme scheme could be nice
(as for the prompt themes).
People have different tastes, but a well-designed feature helps people
who like it without getting in the way of people who don't, meaning you
don't have to turn it off. Also, turning everything off by default must
be the _worst_ default possible, since people who don't spend hours
tweaking the shell (i.e. 99% of all people who use shells) will never
discover all the cool things that exist.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
<Esc-p> in zsh?
Doesn't work on my machine. Let me guess: I need to manually load
something first? :-(
No, you need to be in emacs mode.
Ah, ok. Must have turned on vi-mode by mistake.
Post by Stephane Chazelas
It finds the last command line with the same first word as the
one on the current line.
cvs<Alt-P>
will bring the last cvs command you ran.
Cool. I'll try it out later.
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane Chazelas
2006-03-08 15:50:37 UTC
Permalink
On 8 Mar 2006 07:34:15 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
same for zsh.
Why then does zsh use over 300 kB on startup with a very short history
file and no other configuration than compinit? This is the amount of
allocated memory as reported by massif, by the way; it only includes
stack and heap allocations.
All the initialisations by zsh and the libs, probably, the
environment variables, the keybinding tables... You can trace
the mallocs if you want.

Note that most of the code is in modules (for instance the line
editor is only loaded for interactive shells).

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Would that mean that fish will use 4 or 7 MB of memory only for
holding the man completion?
Is there a way to disable that feature?
No need. Fish does not store the database in memory; it uses apropos
and grep to find the correct descriptions. Actually, reading the
database into memory wouldn't work very well; for example, Debian and
Fedora seem to use different whatis formats.
[...]

most zsh completions use a cache. Without that, completion would be
unusable (too slow) on many systems, especially when NFS is involved.
It's very annoying when completion takes ages. The first versions of
man completion some time ago (in tcsh or zsh) were often unusable
because of that.
--
Stephane
l***@gmail.com
2006-03-08 16:14:09 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
same for zsh.
Why then does zsh use over 300 kB on startup with a very short history
file and no other configuration than compinit? This is the amount of
allocated memory as reported by massif, by the way; it only includes
stack and heap allocations.
All the initialisations by zsh and the libs, probably, the
environment variables, the keybinding tables... You can trace
the mallocs if you want.
Ok, it seems that the heap overhead is nearly 100 kB, the stack takes
another 50, and that the functions ztrdup, createparam and hashdir are
the biggest consumers otherwise. The huge amount of heap overhead
implies that zsh permanently uses the memory from well over 10000
malloc calls (8 bytes of memory overhead per allocation on x86 using
glibc, if my memory serves me correctly). That's a lot of allocations.
It seems to me that zsh does do a fair amount of loading on startup.

I'm guessing hashdir creates the PATH cache. ztrdup is obviously a pun
on strdup, i.e. it's used to create copies of strings. createparam
is a bit too generic for me to guess.

One should not forget, though, that fish is not actually any faster at
starting up than zsh; both take ~0.6 seconds on my machine. The reason
fish is so slow is that it does not use builtins for echo, printf and
other common commands.
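For what it's worth, the ~0.6 s figure is easy to reproduce. A rough sketch, assuming GNU date (the %N nanosecond format) and /bin/sh as the shell under test; for the heap numbers you would instead run the shell under valgrind's massif tool:

```shell
# Time how long a shell takes to start and immediately exit.
# Assumes GNU date (%N). For heap profiling, the equivalent would be
# something like: valgrind --tool=massif zsh -i -c exit
t0=$(date +%s%N)
/bin/sh -c exit
t1=$(date +%s%N)
elapsed_ms=$(( (t1 - t0) / 1000000 ))
echo "startup: ${elapsed_ms} ms"
```

Averaging several runs (and discarding the first, which pays the cost of paging the binary in) gives a steadier number.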
Post by Stephane Chazelas
Note that most of the code is in modules (for instance the line
editor is only loaded for interactive shells).
The fish binary contains all objects of fish, but e.g. the completion
engine and the interactive editor are only initialized when they are
first used, so they don't get loaded in non-interactive mode.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Would that mean that fish will use 4 or 7 MB of memory only for
holding the man completion?
Is there a way to disable that feature?
No need. Fish does not store the database in memory; it uses apropos
and grep to find the correct descriptions. Actually, reading the
database into memory wouldn't work very well; for example, Debian and
Fedora seem to use different whatis formats.
[...]
most zsh completions use a cache. Without that, completion would be
unusable (too slow) on many systems, especially when NFS is involved.
It's very annoying when completion takes ages. The first versions of
man completion some time ago (in tcsh or zsh) were often unusable
because of that.
Seems not to be an issue anymore. Even a 300 MHz computer handles those
things just fine if you implement them properly. The important part is
to phrase the search so you only run apropos/grep _once_ for all
completions.

Fish does use caching (on-disk, not in-memory) when completing rpm
packages, but only on systems without apt. If apt is compiled to use
the rpm format, it is almost an order of magnitude faster at
completing packages than the rpm command is, and at least Fedora users
seem to install apt pretty often.

BTW, yum is the standard commandline package installer on fedora, it's
kind of like apt-get.
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane Chazelas
2006-03-08 17:35:08 UTC
Permalink
On 8 Mar 2006 08:14:09 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Note that most of the code is in modules (for instance the line
editor is only loaded for interactive shells).
The fish binary contains all objects of fish, but e.g. the completion
engine and the interactive editor are only initialized when they are
first used, so they don't get loaded in non-interactive mode.
The zsh completion system is implemented with a few binary modules
but mainly with autoloadable (on-demand) zsh functions (so it will
be slower but will have a smaller memory footprint than compiled
code).

[...]
Post by l***@gmail.com
Seems not to be an issue anymore. Even a 300 MHz computer handles those
things just fine if you implement them properly. The important part is
to phrase the search so you only run apropos/grep _once_ for all
completions.
That's generally not an issue with the CPU speed. In many places
where Unix is used, the data is shared and access to it can be
very slow.

[...]
Post by l***@gmail.com
BTW, yum is the standard commandline package installer on fedora, it's
kind of like apt-get.
Thanks. I take it the zsh developers or users don't use Fedora.
Which sort of makes sense to me ;)
--
Stephane
l***@gmail.com
2006-03-09 00:38:56 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Note that most of the code is in modules (for instance the line
editor is only loaded for interactive shells).
The fish binary contains all objects of fish, but e.g. the completion
engine and the interactive editor are only initialized when they are
first used, so they don't get loaded in non-interactive mode.
The zsh completion system is implemented with a few binary modules
but mainly with autoloadable (on-demand) zsh functions (so it will
be slower but will have a smaller memory footprint than compiled
code).
That isn't really how modern Unix systems work, though. The part of the
program binary containing actual code is only loaded once for all
running program instances, so running 1000 zsh instances won't use more
memory for the binary than running one instance. Also, only the pages
which contain code that is actually used will ever get paged in. The
latter means that you have a 4 kB granularity on what gets loaded. A
quick check of the object files in zsh tells me that they often have a
size between 10 and 50 kB. So large parts of the program that aren't
used will never leave the disk, while the parts that are loaded into
memory are only loaded once. In other words, all semi-modern OSes do
this optimization for you.
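A quick way to see demand paging at work, on Linux at least. This sketch reads the Linux-specific /proc/self/statm, so it is illustrative rather than portable:

```shell
# Compare how much of this process is mapped vs. actually resident in
# RAM. Untouched pages of the binary count toward 'mapped' but never
# leave the disk, so 'resident' is typically much smaller.
read -r mapped resident rest < /proc/self/statm   # values are in pages
page_kb=$(( $(getconf PAGESIZE) / 1024 ))
echo "mapped: $(( mapped * page_kb )) kB, resident: $(( resident * page_kb )) kB"
```

The gap between the two numbers is exactly the part of the binary and libraries that has been mapped but never touched.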
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Seems not to be an issue anymore. Even a 300 MHz computer handles those
things just fine if you implement them properly. The important part is
to phrase the search so you only run apropos/grep _once_ for all
completions.
That's generally not an issue with the CPU speed. In many places
where Unix is used, the data is shared and access to it can be
very slow.
Good point. It might be a problem over NFS. I've never used fish on
anything with slower IO than either 5400 RPM PATA or AFS network
filesystems under heavy load, both of which work perfectly. But NFS3
with conservative caching and a server under load would probably not
work very well.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
BTW, yum is the standard commandline package installer on fedora, it's
kind of like apt-get.
Thanks. I take it the zsh developers or users don't use Fedora.
Which sort of makes sense to me ;)
What is the zsh developers' distro of choice, then? Xandros? Linspire?
;-)
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane Chazelas
2006-03-08 15:20:07 UTC
Permalink
On 8 Mar 2006 04:17:47 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Probably. Fish has command-specific completions for ~150 commands. I
tried out a few commands and noticed that zsh lacks completions for
darcs, su, yum and most subcommands of apt, while fish lacks
completions for telnet, rcp and finger. It would seem to me that zsh
has completions for many old-school commands that fish lacks, while
fish has completion support for a few newer commands that zsh lacks. It
is possible that there is a newer zsh release featuring support for
many more commands; I'm using 4.2.1.
darcs support was added on 2004-09-24, su on 1999-07-09.
What are the missing apt subcommands you are thinking of? I
don't know what yum is. Generally, when someone thinks of a
command that he would like to have a completion for, he writes
it and submits it to the zsh-workers mailing list, or he
requests it on that list.

4.2.1 was released on 2004-08-13, the latest production release
is 4.2.6, the latest development release is 4.3.2.

By "old-school", do you mean non-Linux?
--
Stephane
l***@gmail.com
2006-03-08 15:52:17 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Probably. Fish has command-specific completions for ~150 commands. I
tried out a few commands and noticed that zsh lacks completions for
darcs, su, yum and most subcommands of apt, while fish lacks
completions for telnet, rcp and finger. It would seem to me that zsh
has completions for many old-school commands that fish lacks, while
fish has completion support for a few newer commands that zsh lacks. It
is possible that there is a newer zsh release featuring support for
many more commands; I'm using 4.2.1.
darcs support was added on 2004-09-24, su on 1999-07-09.
What are the missing apt subcommands you are thinking of? I
don't know what yum is. Generally, when someone thinks of a
command that he would like to have a completion for, he writes
it and submits it to the zsh-workers mailing list, or he
requests it on that list.
4.2.1 was released on 2004-08-13, the latest production release
is 4.2.6, the latest development release is 4.3.2.
The zsh su support doesn't include switch completion, only username
completion: 'su -<TAB>' gives you nothing. Fish has both, and gives you
the user's full name as a description when completing usernames.
Missing apt commands include apt-build, apt-move and apt-spy. There are
a lot of apt commands.

Also, I noticed that the number of completions given by the $_comps
variable is pretty inflated; e.g. bzip is listed 7 times, apt 4 times,
and so on.

print ${_comps:#-*-*}|tr " " "\n"|sort|uniq|wc -l

gives me 298 completions, which is still more than fish, but still in
the same area.
Post by Stephane Chazelas
By "old-school", do you mean non-Linux?
I meant like how telnet has mostly been replaced by ssh, how rcp has
mostly been replaced by scp, etc..
Post by Stephane Chazelas
--
Stephane
Stephane Chazelas
2006-03-08 16:36:04 UTC
Permalink
On 8 Mar 2006 07:52:17 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
print ${_comps:#-*-*}|tr " " "\n"|sort|uniq|wc -l
gives me 298 completions, which is still more than fish, but still in
the same area.
$_comps is an associative array.

Expanded like that, it gives you the values. What you want is the
keys; hence the (k) in my post:

$ print -rl ${(k)_comps} | grep -v '^-.*-' | sort -u | wc -l
765

Or

print -r ${#${(u)${(k)_comps#-*-*}}}
Post by l***@gmail.com
Post by Stephane Chazelas
By "old-school", do you mean non-Linux?
I meant like how telnet has mostly been replaced by ssh, how rcp has
mostly been replaced by scp, etc..
[...]


???

They serve different purposes. I never use ssh except for secure
transfer of confidential data over untrusted networks. In other
circumstances, it's just waste of resource and bandwidth.

Moreover, some telnet implementations support encryption, so they are
just as good as ssh. AFAICS, ssh is very rarely installed
on Unix systems.
--
Stephane
l***@gmail.com
2006-03-09 00:26:27 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
print ${_comps:#-*-*}|tr " " "\n"|sort|uniq|wc -l
gives me 298 completions, which is still more than fish, but still in
the same area.
$_comps is an associative array.
expanded like that it gives you the values. What you want is the
Hence the (k) in my post
$ print -rl ${(k)_comps} | grep -v '^-.*-' | sort -u | wc -l
765
Or
print -r ${#${(u)${(k)_comps#-*-*}}}
Ok. On the other hand, I'm guessing that when multiple commands use the
same completion function, this generally means that these commands do
not have completions for switches, only e.g. username completion.
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Stephane Chazelas
By "old-school", do you mean non-Linux?
I meant like how telnet has mostly been replaced by ssh, how rcp has
mostly been replaced by scp, etc..
[...]
???
They serve different purposes. I never use ssh except for secure
transfer of confidential data over untrusted networks. In other
circumstances, it's just waste of resource and bandwidth.
* ssh can compress IO, meaning that you sometimes get better performance.

* With ssh you can use public key authentication so that you don't have
to use passwords.
* How well can you really trust a trusted network? If one machine is
compromised, it can often be used to sniff passwords for all other
machines.

Telnet still has a few uses, like talking directly to a mailserver,
though.
Post by Stephane Chazelas
Moreover, some telnet implementations support encryption, so they are
just as good as ssh. AFAICS, ssh is very rarely installed
on Unix systems.
Sure, Kerberos Telnet works. Is there a telnet version that implements
encrypted port forwarding, though?
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane Chazelas
2006-03-09 09:24:46 UTC
Permalink
On 8 Mar 2006 16:26:27 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
print -r ${#${(u)${(k)_comps#-*-*}}}
Ok. On the other hand, I'm guessing that when multiple commands use the
same completion function, this generally means that these commands do
not have completions for switches, only e.g. username completion.
Not necessarily, see

print -f '%s => %s\n' ${(kv)_comps} | sort -k 3 | uniq -D -f2

(GNU uniq)


[...]
Post by l***@gmail.com
Telnet still has a few uses, like talking directly to a mailserver,
though.
That's a bit of overkill given that a mailserver doesn't
implement the telnet protocol, but there's no standard
alternative (note that zsh has a TCP socket module ;),
which is why it's a common use of the command.
Post by l***@gmail.com
Post by Stephane Chazelas
More over, some telnet implementation support encryptions so are
just as good as ssh. AFAICS, ssh is very rarely installed
on Unix systems.
Sure, Kerberos Telnet works. Is there a telnet version that implements
encrypted port forwarding, though?
[...]

That's true. That's not what telnet is meant to do. I agree ssh
is a very useful command, but I wouldn't say telnet is
deprecated, nor that there's always a benefit to replacing telnet
or rlogin/rcp with ssh.
--
Stephane
Stephane Chazelas
2006-03-08 17:42:11 UTC
Permalink
On 8 Mar 2006 07:52:17 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
4.2.1 was released on 2004-08-13, the latest production release
is 4.2.6, the latest development release is 4.3.2.
zsh su support doesn't include switch completion, only username
completion. 'su -<TAB>' gives you nothing.
su generally doesn't take options. It's a non-standard utility,
and I know of at least 3 different implementations for Linux
alone. Some might take options, some not; zsh's approach is as
good as any. I guess it could try to parse the man page, but
that may be a bit overkill (and slow).
--
Stephane
Kurt Swanson
2006-03-08 18:38:20 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
4.2.1 was released on 2004-08-13, the latest production release
is 4.2.6, the latest development release is 4.3.2.
zsh su support doesn't include switch completion, only username
completion. 'su -<TAB>' gives you nothing.
su generally doesn't take options. It's a non-standard utility,
and I know of at least 3 different implementations for Linux
alone. Some might take options, some not; zsh's approach is as
good as any. I guess it could try to parse the man page, but
that may be a bit overkill (and slow).
Maybe a better approach would be for commands to declare their options
in a proper format, something like zsh's _gnu_generic, which would
allow commands to declare not only options, but arguments to the
options as well as parameters to the command. Of course getting all
the writers of all the commands to implement this would be infeasible.

Incidentally, "compdef _gnu_generic su" on my machine does at least
allow me to complete all options (but not parameters or arguments to
options, nor does it complete abbreviated options, but still!)...
--
© 2006 Kurt Swanson AB
l***@gmail.com
2006-03-09 01:06:34 UTC
Permalink
Post by Kurt Swanson
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
4.2.1 was released on 2004-08-13, the latest production release
is 4.2.6, the latest development release is 4.3.2.
zsh su support doesn't include switch completion, only username
completion. 'su -<TAB>' gives you nothing.
su generally doesn't take options. It's a non-standard utility,
and I know of at least 3 different implementations for Linux
alone. Some might take options, some not; zsh's approach is as
good as any. I guess it could try to parse the man page, but
that may be a bit overkill (and slow).
Maybe a better approach would be for commands to declare their options
in a proper format, something like zsh's _gnu_generic, which would
allow commands to declare not only options, but arguments to the
options as well as parameters to the command. Of course getting all
the writers of all the commands to implement this would be infeasible.
Incidentally, "compdef _gnu_generic su" on my machine does at least
allow me to complete all options (but not parameters or arguments to
options, nor does it complete abbreviated options, but still!)...
It would be great, and it's something that _all_ shells could use. I've
been thinking about contacting the glibc developers and proposing an
extended getopt_long function, getopt_long_completable or something
like that, which would support defining completions in the program. I
don't know what their feelings toward this would be, but it would be a
nice first step towards better tab completions all around.
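In the absence of any such getopt_long_completable, the best a shell can do today is scrape --help output, which is roughly what zsh's _gnu_generic does. A self-contained POSIX-shell sketch, using a canned help text for a hypothetical 'frob' command (both the command and its options are invented for this example):

```shell
# Extract long option names from GNU-style --help output.
# 'frob' and its help text are made up for illustration.
help_text='Usage: frob [OPTION]...
  -v, --verbose        explain what is being done
      --color[=WHEN]   colorize the output
  -h, --help           display this help and exit'
opts=$(printf '%s\n' "$help_text" | grep -oE -- '--[a-z][a-z-]*' | sort -u)
printf '%s\n' "$opts"
```

This prints --color, --help and --verbose, one per line. A declared-options interface in the program itself would make the argument types and option arguments available too, which no amount of --help scraping can recover reliably.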
Post by Kurt Swanson
--
© 2006 Kurt Swanson AB
--
Axel
Kurt Swanson
2006-03-09 01:22:53 UTC
Permalink
Post by l***@gmail.com
Post by Kurt Swanson
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
4.2.1 was released on 2004-08-13, the latest production release
is 4.2.6, the latest development release is 4.3.2.
zsh su support doesn't include switch completion, only username
completion. 'su -<TAB>' gives you nothing.
su generally doesn't take options. It's a non-standard utility,
and I know of at least 3 different implementations for Linux
alone. Some might take options, some not; zsh's approach is as
good as any. I guess it could try to parse the man page, but
that may be a bit overkill (and slow).
Maybe a better approach would be for commands to declare their options
in a proper format, something like zsh's _gnu_generic, which would
allow commands to declare not only options, but arguments to the
options as well as parameters to the command. Of course getting all
the writers of all the commands to implement this would be infeasible.
It would be great. And something that _all_ shells could use. I've been
thinking about contacting the glibc developers and proposing an
extended getopt_long function, getopt_long_completable or something
like that, which would support defining completions in the program.
Don't know what their feelings toward this would be, but it would be a
nice first step towards better tab completions all around.
I would certainly support it. The really nice thing would be the
dynamic nature of it all--shells would merely need to parse/complete
on this output without respect to the platform, state, etc. I.e. no
"if linux do this else if..."

A proper declaration of one's options and arguments would be akin to a
finite state machine. This would still never cover all the quirkiness
of some commands' parameter-options, but would go a long way...
--
© 2006 Kurt Swanson AB
l***@gmail.com
2006-03-09 10:04:09 UTC
Permalink
Post by Kurt Swanson
Post by l***@gmail.com
Post by Kurt Swanson
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
4.2.1 was released on 2004-08-13, the latest production release
is 4.2.6, the latest development release is 4.3.2.
zsh su support doesn't include switch completion, only username
completion. 'su -<TAB>' gives you nothing.
su generally doesn't take options. It's a non-standard utility,
and I know of at least 3 different implementations only for
Linux. Some might take options, some not, zsh approach is as
good as any. I guess it could try and parse the man page but
that may be a bit overkill (and slow).
Maybe a better approach would be for commands to declare their options
in a proper format, something like zsh's _gnu_generic, which would
allow commands to declare not only options, but arguments to the
options as well as parameters to the command. Of course getting all
the writers of all the commands to implement this would be infeasible.
It would be great. And something that _all_ shells could use. I've been
thinking about contacting the glibc developers and proposing an
extended getopt_long function, getopt_long_completable or something
like that, which would support defining completions in the program.
Don't know what their feelings toward this would be, but it would be a
nice first step towards better tab completions all around.
I would certainly support it. The really nice thing would be the
dynamic nature of it all--shells would merely need to parse/complete
on this output without respect to the platform, state, etc. I.e. no
"if linux do this else if..."
A proper declaration of one's options and arguments would be akin to a
finite state machine. This would still never cover all the quirkiness
of some commands' parameter-options, but would go a long way...
here's what I had in mind. Add two fields to the getopt_long option
struct, namely 'description', containing a description of the
completion, and 'completion_flags', which is a union of the different
types of completions that the argument should support, e.g. filename
completion, network interface completion, etc. Also, option entries with
a name of 0 are allowed in two cases:

* To specify completions for arguments that are not arguments to a
specific switch, in which case val and flag must also be 0.
* To specify completions for short switches that have no equivalent long
switches, in which case flag must be 0 and val must be the name of the
short switch.

#define COMPLETE_FILE 1
#define COMPLETE_USER 2
#define COMPLETE_DIRECTORY 4
#define COMPLETE_HOST 8
#define COMPLETE_PACKAGE 16
#define COMPLETE_NETWORK_INTERFACE 32
#define COMPLETE_COMMAND 64

struct option_comp
{
const char *name;
int has_arg;
int *flag;
int val;
const char *description;
int complete_flag;
};

extern int getopt_long_comp (int argc, char *const *argv, const char
*shortopts,
const struct option_comp *longopts, int *longind);


An example of this interface in use, parsing the arguments for su:

static const struct option_comp long_opt[] =
{
    {"login", no_argument, 0, 'l', "Make the shell a login shell", 0},
    {"command", required_argument, 0, 'c', "Command to execute",
     COMPLETE_COMMAND},
    {"fast", no_argument, 0, 'f', "Pass f to the shell", 0},
    {"preserve-environment", no_argument, 0, 'm',
     "Do not reset environment variables", 0},
    {0, no_argument, 0, 'p', "Do not reset environment variables", 0},
    {"shell", required_argument, 0, 's',
     "Run specified shell if /etc/shells allows it", COMPLETE_COMMAND},
    {"help", no_argument, 0, 0, "Display help and exit", 0},
    {"version", no_argument, 0, 0, "Display version and exit", 0},
    {0, required_argument, 0, 0, "Specify new user", COMPLETE_USER},
    {0, 0, 0, 0, 0, 0} /* terminator */
};

while(1)
{
    int opt_index = 0;
    int opt = getopt_long_comp( argc, argv, "lc:fmps:", long_opt,
                                &opt_index );

    if( opt == -1 )
        break;

    switch( opt )
    {
        case 'l':
        ....
    }
}

In order to get this information out, one would simply use 'su
--print-completions', in which case the above could get printed to
stdout in a format that is easy to parse for both humans and machines,
perhaps as a tab separated list.
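To make the dump format concrete, here is a sketch. Nothing below
exists today: su has no --print-completions switch, and the field
order (long name, short name, argument requirement, completion type,
description) is an assumption made up for illustration only.

```shell
# Hypothetical output of 'su --print-completions': one option per
# line, fields separated by tabs.
printf '%s\t%s\t%s\t%s\t%s\n' \
    'login'   'l' 'no_argument'       ''        'Make the shell a login shell' \
    'command' 'c' 'required_argument' 'COMMAND' 'Command to execute' \
    'shell'   's' 'required_argument' 'COMMAND' 'Run specified shell if /etc/shells allows it' \
    ''        ''  'required_argument' 'USER'    'Specify new user'
```

A shell could then split each line on the tab character and build its
completions from that, with no per-platform special cases.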
Post by Kurt Swanson
--
© 2006 Kurt Swanson AB
--
Axel
Kurt Swanson
2006-03-10 20:11:26 UTC
Permalink
Post by l***@gmail.com
here's what I had in mind. Add two fields to the getopt_long option
struct, namely 'description', containing a description of the
completion, and 'completion_flags', which is a union of the different
types of completions that the argument should support, e.g. filename
completion, network interface completion, etc. Also, option entries with
a name of 0 are allowed in two cases:
* To specify completions for arguments that are not arguments to a
specific switch, in which case val and flag must also be 0.
* To specify completions for short switches that have no equivalent long
switches, in which case flag must be 0 and val must be the name of the
short switch.
#define COMPLETE_FILE 1
#define COMPLETE_USER 2
#define COMPLETE_DIRECTORY 4
#define COMPLETE_HOST 8
#define COMPLETE_PACKAGE 16
#define COMPLETE_NETWORK_INTERFACE 32
#define COMPLETE_COMMAND 64
struct option_comp
{
const char *name;
int has_arg;
int *flag;
int val;
const char *description;
int complete_flag;
};
extern int getopt_long_comp (int argc, char *const *argv, const char
*shortopts,
const struct option_comp *longopts, int *longind);
static const struct option_comp long_opt[] =
{
    {"login", no_argument, 0, 'l', "Make the shell a login shell", 0},
    {"command", required_argument, 0, 'c', "Command to execute",
     COMPLETE_COMMAND},
    {"fast", no_argument, 0, 'f', "Pass f to the shell", 0},
    {"preserve-environment", no_argument, 0, 'm',
     "Do not reset environment variables", 0},
    {0, no_argument, 0, 'p', "Do not reset environment variables", 0},
    {"shell", required_argument, 0, 's',
     "Run specified shell if /etc/shells allows it", COMPLETE_COMMAND},
    {"help", no_argument, 0, 0, "Display help and exit", 0},
    {"version", no_argument, 0, 0, "Display version and exit", 0},
    {0, required_argument, 0, 0, "Specify new user", COMPLETE_USER},
    {0, 0, 0, 0, 0, 0} /* terminator */
};
while(1)
{
    int opt_index = 0;
    int opt = getopt_long_comp( argc, argv, "lc:fmps:", long_opt,
                                &opt_index );
    if( opt == -1 )
        break;
    switch( opt )
    {
    ....
    }
}
In order to get this information out, one would simply use 'su
--print-completions', in which case the above could get printed to
stdout in a format that is easy to parse for both humans and machines,
perhaps as a tab separated list.
This is an interesting start, but it lacks extensibility and state.
In terms of the first, hard-coding parameter types as powers-of-two
flags will never cover all the possibilities. As to the second,
consider the following example:

% rpm -q «tab»
should match installed packages, but
% rpm -qp «tab»
should match files matching '*.rpm'

These are things existing completion systems can do well, such as
zsh's, even if they have to be hand-coded...
--
© 2006 Kurt Swanson AB
Jordan Abel
2006-03-08 18:59:46 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
4.2.1 was released on 2004-08-13, the latest production release
is 4.2.6, the latest development release is 4.3.2.
zsh su support doesn't include switch completion, only username
completion. 'su -<TAB>' gives you nothing.
su generally doesn't take options. It's a non-standard utility,
Being able to use -c is fairly standard, though on some systems you need
to specify a username first and the -c is interpreted by the shell

for example, su root -c ls
Post by Stephane Chazelas
and I know of at least 3 different implementations only for
Linux. Some might take options, some not, zsh approach is as
good as any. I guess it could try and parse the man page but
that may be a bit overkill (and slow).
Jordan Abel
2006-03-08 18:25:12 UTC
Permalink
Post by l***@gmail.com
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
zsh has dynamic scope. lexical scope can be overly restrictive.
But if you simply create a new variable, it will be global by default,
which is the part that has me worried.
Same in awk, perl and many languages. It looks more intuitive
(familiar) to me to have to tell it when I want to limit the
scope.
I disagree. Most variables that you use inside functions should be
function local in my experience. And most of those that shouldn't,
should be exported as well, anyway.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
* Advanced tab completions, including useful descriptions. For example,
type 'man wcs<TAB>' and you'll be presented with a listing of all
manual pages beginning with 'wcs' and a short summary of what each
page contains.
Zsh does this.
At least zsh 4.2.1 doesn't. If you complete 'ls -' it will show you
switches to ls with descriptions, just like fish. But at least my
* Provide the whatis entry as the description for manual page
completions
* Provide the whatis entry as the description for command name
completions
These are nice features. Note that zsh has all the necessary
framework to enable you to do the same without having to modify
zsh's code. I would bet zsh supports many more such features
than fish given that new command supports have been added to it
for about 10 years, though.
Probably. Fish has command specific completions for ~150 commands. I
tried out a few commands and noticed that zsh lacks completions for
darcs, su, yum and most subcommands of apt,
I don't think it has them enabled by default for apt, but when I was
introduced to it I was sent a zshrc that enabled it [among many other
things]. Mine at least completes users for su, and it completes
everything for sudo.
Post by l***@gmail.com
while fish lacks completions for telnet, rcp and finger. It would seem
to me that zsh has completions for many old-school commands that fish
lacks, while fish has completion support for a few newer commands the
zsh lacks. It is possible that there is a new zsh release featuring
support for many more commands, I'm using 4.2.1.
How extensible is your completion system?
Post by l***@gmail.com
Post by Stephane Chazelas
One of the very few critics of zsh I've heard of is that when
you use all of its features, it can get very bloated and
resource consuming. If you hold a cache of all the whatis or
user database in memory, you'll run into similar problems.
Fish autoloads everything on first use. The first time you use a
shellscript function, that function is loaded. The first time you
complete a command, the completions for that command are loaded. The
first time you access the commandline history, the history file is
loaded.
Eh? so how do you WRITE to the history file?
Post by l***@gmail.com
Fish is also a bit slower than other shells on some types of scripts,
since it doesn't implement a huge number of commands as builtins. E.g.
time, pwd, kill, printf and echo are standard unix commands, not
builtins.
time as a builtin has the advantage of being able to time each member of
a pipeline.

pwd as a builtin can print the logical directory name if you have
configured it to do so. (so can the pwd command on some systems, though;
does your shell set the PWD environment variable?)
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
* Correctly complete switches when multiple short switches are grouped
together, like 'tar -zx<TAB>'
What do you mean? That's not the correct tar syntax BTW.
tar -zx
is synonymous with
tar -z -x
One can group single-character switches on a single hyphen. Fish
supports this in its completion code.
Yes, but tar doesn't use hyphens for its option strings.

[zsh can do this, though its list of tar flags is sorely lacking.]
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
* Correctly support gnu-style switches that accept arguments but don't
require them, like ls --color. It always completes with '--color=',
even though '--color' would also be a correct completion
zsh supports them
ls --co<tab><space>
or
ls --co<tab><enter>
zsh will remove the "="
Ok. Nice.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
What would be nice would be integration with the _screen_ clipboard -
send \e]83;paste .\a to paste, and \e]83;readbuf [tmpfile]\a to copy.
you could use the screen-exchange file, too.
That sounds very useful. Is there some place I can read up on how the
screen clipboard works?
info screen
I was kind of hoping there'd be some form of programmable interface for
this. I guess I'll have some work before me, then...
You do it by sending escape sequences, such as the ones I just named, to
execute screen commands. The ability to do so also depends on a feature
that has to be enabled in screen, so read up before you do your testing.
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Jordan Abel
that sounds like vim's way - anything wrong with the i-search feature in
other shells?
Yes.
* If you've typed something and realise that you may have already
written that before, you can search on what you've already typed
<Esc-p> in zsh?
Doesn't work on my machine. Let me guess: I need to manually load
something first? :-(
No idea. works for me. Maybe you have a zshrc that's rebinding it to
something else?
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
* Fish combines the history moving with the history search, so there
are fewer things to learn
* Fish highlights the search match, making it easier to tell if you've
found the right command
That's neat.
Thanks.
Post by Stephane Chazelas
--
Stephane
l***@gmail.com
2006-03-09 01:00:39 UTC
Permalink
Post by Jordan Abel
Post by l***@gmail.com
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
zsh has dynamic scope. lexical scope can be overly restrictive.
But if you simply create a new variable, it will be global by default,
which is the part that has me worried.
Same in awk, perl and many languages. It looks more intuitive
(familiar) to me to have to tell it when I want to limit the
scope.
I disagree. Most variables that you use inside functions should be
function local in my experience. And most of those that shouldn't,
should be exported as well, anyway.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
* Advanced tab completions, including useful descriptions. For example,
type 'man wcs<TAB>' and you'll be presented with a listing of all
manual pages beginning with 'wcs' and a short summary of what each
page contains.
Zsh does this.
At least zsh 4.2.1 doesn't. If you complete 'ls -' it will show you
switches to ls with descriptions, just like fish. But at least my
* Provide the whatis entry as the description for manual page
completions
* Provide the whatis entry as the description for command name
completions
These are nice features. Note that zsh has all the necessary
framework to enable you to do the same without having to modify
zsh's code. I would bet zsh supports many more such features
than fish given that new command supports have been added to it
for about 10 years, though.
Probably. Fish has command specific completions for ~150 commands. I
tried out a few commands and noticed that zsh lacks completions for
darcs, su, yum and most subcommands of apt,
I don't think it has them enabled by default for apt, but when I was
introduced to it I was sent a zshrc that enabled it [among many other
things]. Mine at least completes users for su, and it completes
everything for sudo.
Even _more_ things that aren't enabled by default. Could you post a
zshrc that will turn everything on?
Post by Jordan Abel
Post by l***@gmail.com
while fish lacks completions for telnet, rcp and finger. It would seem
to me that zsh has completions for many old-school commands that fish
lacks, while fish has completion support for a few newer commands the
zsh lacks. It is possible that there is a new zsh release featuring
support for many more commands, I'm using 4.2.1.
How extensible is your completion system?
The completion system is designed to make it relatively easy to get
started writing completions, but ultimately you can simply run any
piece of shellscript and its output will be interpreted as completions
in the format

COMPLETION[<TAB>DESCRIPTION]
COMPLETION[<TAB>DESCRIPTION]
COMPLETION[<TAB>DESCRIPTION]
...

meaning you can do pretty much anything. There are a host of special
switches though, to help you do common things. These include:

* Special support for easily defining single character switches,
gnu-style and old-style long switches, and any arguments that they take
* Descriptions common to multiple switches and arguments to switches
* Switches or arguments that should only be offered if some
condition is met, e.g. switches to specific subcommands of cvs

For more info, see the documentation at

http://roo.no-ip.org/fish/user_doc/index.html#completion-own
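For a flavour of what those special switches look like, here is a
sketch of su completions using fish's 'complete' builtin. The flag
names (-c command, -s short switch, -l long switch, -d description,
-a arguments, -r required argument, -x exclusive) follow the
documentation linked above; the specific lines are my own illustration,
not a paste from the fish distribution.

```fish
# -s/-l pair a short and a gnu-style long switch under one description
complete -c su -s l -l login -d "Make the shell a login shell"
# -r marks a switch whose argument is mandatory
complete -c su -s c -l command -d "Command to execute" -r
complete -c su -s s -l shell -d "Run specified shell if /etc/shells allows it" -r
# -x: complete only the declared arguments (here, the username);
# -a runs arbitrary shellscript whose output becomes the completions
complete -c su -x -a "(cat /etc/passwd | cut -d : -f 1)" -d "Username"
```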
Post by Jordan Abel
Post by l***@gmail.com
Post by Stephane Chazelas
One of the very few critics of zsh I've heard of is that when
you use all of its features, it can get very bloated and
resource consuming. If you hold a cache of all the whatis or
user database in memory, you'll run into similar problems.
Fish autoloads everything on first use. The first time you use a
shellscript function, that function is loaded. The first time you
complete a command, the completions for that command are loaded. The
first time you access the commandline history, the history file is
loaded.
Eh? so how do you WRITE to the history file?
If you have entered any commands, they are of course written to the
history file on shutdown.
Post by Jordan Abel
Post by l***@gmail.com
Fish is also a bit slower than other shells on some types of scripts,
since it doesn't implement a huge number of commands as builtins. E.g.
time, pwd, kill, printf and echo are standard unix commands, not
builtins.
time as a builtin has the advantage of being able to time each member of
a pipeline.
True. You could of course use

time a|time b|time c

but that would be silly. Not enough of an advantage for me, though. The
inclusion of a huge number of builtins violates the principle of doing
one thing and doing it well, in my book.
Post by Jordan Abel
pwd as a builtin can print the logical directory name if you have
configured it to do so. (so can the pwd command on some systems, though;
does your shell set the PWD environment variable?)
Fish sets PWD. Fish does not support logical directory names, and here
is a cut-and-paste from the fish FAQ to explain why:

Q: Why do cd, pwd and other fish commands always resolve symlinked
directories to their canonical path? For example, if the directory
~/images is a symlink to ~/Documents/Images and I write 'cd ~/images',
my prompt will say ~/Documents/Images, not ~/images.

A: Because it is impossible to consistently keep symlinked directories
unresolved. It is indeed possible to do this partially, and many other
shells do so. But it was felt there are enough serious corner cases
that this is a bad idea. Most such issues have to do with how '..' is
handled, and are variations of the following example:

Writing cd images; ls .. given the above directory structure would list
the contents of ~/Documents, not of ~, even though using cd .. changes
the current directory to ~, and the prompt, the pwd builtin and many
other directory information sources suggest that the current
directory is ~/images and its parent is ~. This issue is not possible
to fix without either making every single command into a builtin,
breaking Unix semantics or implementing kludges in every single
command.

This issue can also be seen when doing IO redirection.

Another related issue is that many programs that operate on recursive
directory trees, like the find command, silently ignore symlinked
directories. For example, find $PWD -name '*.txt' silently fails in
shells that don't resolve symlinked paths.
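The '..' mismatch described above is easy to reproduce in any shell
that keeps logical paths (bash, for instance). A sketch, with made-up
/tmp paths:

```shell
# Build a symlinked directory like the ~/images example above.
mkdir -p /tmp/fishdemo/Documents/Images
rm -f /tmp/fishdemo/images
ln -s /tmp/fishdemo/Documents/Images /tmp/fishdemo/images
cd /tmp/fishdemo/images
pwd     # the shell reports the logical path /tmp/fishdemo/images
ls ..   # but ls resolves '..' physically: it lists /tmp/fishdemo/Documents
```

The builtin pwd claims the parent is /tmp/fishdemo, while every
external command sees /tmp/fishdemo/Documents; that disagreement is
exactly the corner case fish avoids by resolving the link at cd time.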
Post by Jordan Abel
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
* Correctly complete switches when multiple short switches are grouped
together, like 'tar -zx<TAB>'
What do you mean? That's not the correct tar syntax BTW.
tar -zx
is synonymous with
tar -z -x
One can group single-character switches on a single hyphen. Fish
supports this in its completion code.
Yes, but tar doesn't use hyphens for its option strings.
tar accepts both these days. Has done so for a pretty long time on
Solaris, *BSD, and Linux at least. FreeBSD tar has this to say on
non-hyphenated options:

"The first synopsis form shows a ``bundled'' option word. This usage
is provided for compatibility with historical implementations. See
COMPATIBILITY below for details."
Post by Jordan Abel
[zsh can do this, though its list of tar flags is sorely lacking.]
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
* Correctly support gnu-style switches that accept arguments but don't
require them, like ls --color. It always completes with '--color=',
even though '--color' would also be a correct completion
zsh supports them
ls --co<tab><space>
or
ls --co<tab><enter>
zsh will remove the "="
Ok. Nice.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
What would be nice would be integration with the _screen_ clipboard -
send \e]83;paste .\a to paste, and \e]83;readbuf [tmpfile]\a to copy.
you could use the screen-exchange file, too.
That sounds very useful. Is there some place I can read up on how the
screen clipboard works?
info screen
I was kind of hoping there'd be some form of programmable interface for
this. I guess I'll have some work before me, then...
You do it by sending escape sequences, such as the ones I just named, to
execute screen commands. The ability to do so also depends on a feature
that has to be enabled in screen, so read up before you do your testing.
I'll do that. Thanks!
Post by Jordan Abel
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Jordan Abel
that sounds like vim's way - anything wrong with the i-search feature in
other shells?
Yes.
* If you've typed something and realise that you may have already
written that before, you can search on what you've already typed
<Esc-p> in zsh?
Doesn't work on my machine. Let me guess: I need to manually load
something first? :-(
No idea. works for me. Maybe you have a zshrc that's rebinding it to
something else?
Stephane suggested that I was using vi-mode. I'm not, so I'm kind of
surprised as well. I guess it's a problem on my end, though.
Post by Jordan Abel
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
* Fish combines the history moving with the history search, so there
are fewer things to learn
* Fish highlights the search match, making it easier to tell if you've
found the right command
That's neat.
Thanks.
Post by Stephane Chazelas
--
Stephane
--
Axel
Jordan Abel
2006-03-09 02:48:42 UTC
Permalink
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
zsh has dynamic scope. lexical scope can be overly restrictive.
But if you simply create a new variable, it will be global by default,
which is the part that has me worried.
Same in awk, perl and many languages. It looks more intuitive
(familiar) to me to have to tell it when I want to limit the
scope.
I disagree. Most variables that you use inside functions should be
function local in my experience. And most of those that shouldn't,
should be exported as well, anyway.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Jordan Abel
Post by l***@gmail.com
* Advanced tab completions, including useful descriptions. For example,
type 'man wcs<TAB>' and you'll be presented with a listing of all
manual pages beginning with 'wcs' and a short summary of what each
page contains.
Zsh does this.
At least zsh 4.2.1 doesn't. If you complete 'ls -' it will show you
switches to ls with descriptions, just like fish. But at least my
* Provide the whatis entry as the description for manual page
completions
* Provide the whatis entry as the description for command name
completions
These are nice features. Note that zsh has all the necessary
framework to enable you to do the same without having to modify
zsh's code. I would bet zsh supports many more such features
than fish given that new command supports have been added to it
for about 10 years, though.
Probably. Fish has command specific completions for ~150 commands. I
tried out a few commands and noticed that zsh lacks completions for
darcs, su, yum and most subcommands of apt,
I don't think it has them enabled by default for apt, but when I was
introduced to it I was sent a zshrc that enabled it [among many other
things]. Mine at least completes users for su, and it completes
everything for sudo.
Even _more_ things that aren't enabled by default. Could you post a
zshrc that will turn everything on?
Actually, I think it was debian's package zshrc that enabled the apt
stuff. apt-get completion is hardly useful to me now.
Chris F.A. Johnson
2006-03-07 22:04:50 UTC
Permalink
...
Post by bsh
Quickly now, which of the following are legal function definitions?
hello () {echo hello }
hello () {echo hello;}
hello () {;echo hello }
hello () {;echo hello;}
hello () { echo hello }
hello () { echo hello;}
hello () { ;echo hello }
hello () { ;echo hello;}
... that is proof of a completely broken syntax.
(Number 4 should also work....)
Not in any shell I've tried it: bash[123], ksh93, ash, pdksh.
--
Chris F.A. Johnson, author | <http://cfaj.freeshell.org>
Shell Scripting Recipes: | My code in this post, if any,
A Problem-Solution Approach | is released under the
2005, Apress | GNU General Public Licence
bsh
2006-03-07 22:50:46 UTC
Permalink
Post by Chris F.A. Johnson
Post by bsh
...
(Number 4 should also work....)
Not in any shell I've tried it: bash[123], ksh93, ash, pdksh.
Perhaps I should instead have said, "Number 4 _should_ also work...."
and append a smiley just in case.

It reminds me of the broken parsing for (IIRC):

for var; do; ...; done

Apparently semicolons are not quite the same as newlines.

;)

=Brian
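The quirk is easy to check: a semicolon terminates a (possibly empty)
command, so 'do;' is rejected, while a newline after 'do' is fine. For
example, in bash/ksh/ash:

```shell
# 'for var' with no 'in' list iterates over the positional parameters;
# a newline after 'do' is legal.
set -- a b c
for var
do
    echo "$var"
done
# The one-liner needs 'do' followed directly by a command:
#     for var; do echo "$var"; done      # legal
#     for var; do; echo "$var"; done     # syntax error
```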
Stephane Chazelas
2006-03-08 18:12:22 UTC
Permalink
On 6 Mar 2006 09:49:29 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
There are a great many features in fish that have little to do with
syntax, like syntax highlighting, advanced tab completion, X clipboard
integration, etc. But this post is only meant to discuss the design
and implications of the changes made to regular shell syntax in fish.
Specifically, I'd be interested in opinions on security considerations,
regressions and further possible changes to the syntax. To try out
fish, visit http://roo.no-ip.org/fish/ or use the prepackaged version
available for many systems including Debian. Fish is GPL'd, and it
works on most Linux versions, NetBSD, FreeBSD, OS X, Solaris and
possibly Cygwin.
[...]

You say on your web site:

the string **l will match doc/html and user_doc/html. The **
wildcard will only match directory names, not file names. The
recursive wildcard matching feature has been borrowed from
zsh, but the ability to mix recursive wildcards with other
tokens, such as user**/ is unique to fish.

What do you mean by that?

In zsh, **/ is the same as (*/)# (** alone is not special, # is
an extended globbing operator)

So **/*l in zsh would seem to return what fish's **l returns.

But what would user**/ be?

**/user*/

(user*/)#

or **/user*/**/*/ ?

(Note that ksh93 also borrowed ** from zsh (in yet another way,
and is not enabled by default))
--
Stephane
l***@gmail.com
2006-03-09 01:05:39 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
There are a great many features in fish that have little to do with
syntax, like syntax highlighting, advanced tab completion, X clipboard
integration, etc. But this post is only meant to discuss the design
and implications of the changes made to regular shell syntax in fish.
Specifically, I'd be interested in opinions on security considerations,
regressions and further possible changes to the syntax. To try out
fish, visit http://roo.no-ip.org/fish/ or use the prepackaged version
available for many systems including Debian. Fish is GPL'd, and it
works on most Linux versions, NetBSD, FreeBSD, OS X, Solaris and
possibly Cygwin.
[...]
the string **l will match doc/html and user_doc/html. The **
wildcard will only match directory names, not file names. The
recursive wildcard matching feature has been borrowed from
zsh, but the ability to mix recursive wildcards with other
tokens, such as user**/ is unique to fish.
What do you mean by that?
In zsh, **/ is the same as (*/)# (** alone is not special, # is
an extended globbing operator)
So **/*l in zsh would seem to return what fish's **l returns.
But what would user**/ be?
**/user*/
(user*/)#
or **/user*/**/*/ ?
'user**' means all recursive directory matches where the first
directory name has the prefix 'user'.
Post by Stephane Chazelas
(Note that ksh93 also borrowed ** from zsh (in yet another way,
and is not enabled by default))
I sense a pattern here...
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane Chazelas
2006-03-09 09:06:52 UTC
Permalink
On 8 Mar 2006 17:05:39 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
'user**' means all recursive directory matches where the first
directory name has the prefix 'user'.
user*/**/*

then.

How would you spell

(user*/)#*

in fish (i.e. all files for which every directory component of the
path contains "user").

Probably more useful as:

([a-z]##/)[a-z]##

(can also be written **/*~*[^a-z/]*)

All files with only letters in their path.

Does fish also support ***/ (follow symlinks in the recursion,
though you're limited to ***/ in zsh).
--
Stephane
l***@gmail.com
2006-03-09 11:29:06 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
'user**' means all recursive directory matches where the first
directory name has the prefix 'user'.
user*/**/*
then.
Yup.
Post by Stephane Chazelas
How would you spell
(user*/)#*
in fish (i.e. all files for which every directory component of the
path contains "user").
([a-z]##/)[a-z]##
(can also be written **/*~*[^a-z/]*)
All files with only letters in their path.
find . -type f |grep "/[a-zA-Z]*\$"

Which I find more readable than the '##' syntax. Besides, I don't like
the overloading of '#', that character should be reserved for comments,
IMO.
Post by Stephane Chazelas
Does fish also support ***/ (follow symlinks in the recursion,
though you're limited to ***/ in zsh).
Nope. The way I see it, '**' is a useful shorthand for recursive
globbing and should be used mostly for simpler matches; find is more
suitable for complicated expressions. One could say that '*' matches
any sequence of characters without a '/', while '**' matches any
sequence of characters. Pretty intuitive to me.
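That reading of '*' vs '**' can be sketched with the doc/html example
from earlier in the thread, using find as a stand-in (the /tmp layout
is made up):

```shell
# '**l' in fish would match doc/html and user_doc/html, because '**'
# crosses '/' boundaries while '*' stops at them. Approximated with find:
mkdir -p /tmp/globdemo/doc/html /tmp/globdemo/user_doc/html
cd /tmp/globdemo
find . -type d -name '*l' | sort    # lists ./doc/html and ./user_doc/html
```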
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane Chazelas
2006-03-09 12:10:56 UTC
Permalink
On 9 Mar 2006 03:29:06 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
([a-z]##/)[a-z]##
(can also be written **/*~*[^a-z/]*)
All files with only letters in their path.
find . -type f |grep "/[a-zA-Z]*\$"
Which I find more readable than the '##' syntax. Besides, I don't like
the overloading of '#', that character should be reserved for comments,
IMO.
Well, I think it is a better approach than ksh's. ksh also
extended its globbing/patterns so that it has the same
functionality. As "*" was already used by the patterns, there
needed to be something else to implement the RE's "*".

ksh (which bash copied, and zsh emulates in ksh mode) did it as:

*(...)

(i.e. "*" becomes a prefix while it is a postfix in REs).
That means that (...) can't be used for grouping. That's why ksh
has @(...) for grouping.

In zsh, RE's * is #, RE's + is ##, RE's ? is (...|) (or
nothing). That makes zsh patterns involve fewer keystrokes.
That also means the grouping can be implemented as (...).
Note that zsh implemented globbing attributes (like case
insensitivity, approximate matching, activation of backreferences)
the same way as perl does: (#...) in zsh, (?...) in perl,
because # can't otherwise follow a ( in zsh the same way as ?
can't otherwise follow a ( in perl.
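The ksh operators described here can be tried in bash, which copied them. A small sketch (bash-specific; requires the extglob option, which must be set before the patterns are parsed):

```shell
# ksh-style extended globbing, as copied by bash (shopt -s extglob):
# '*(pat)' = zero or more, '+(pat)' = one or more, '@(a|b)' = grouping.
shopt -s extglob
[[ foofoo == +(foo) ]]  && echo "'+(foo)' matches one or more foo"
[[ "" == *(foo) ]]      && echo "'*(foo)' also matches zero occurrences"
[[ bar == @(foo|bar) ]] && echo "'@(...)' is plain grouping/alternation"
```

In a script file this works because bash executes shopt before parsing the later lines; at the command line, set extglob first.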
comments are not useful in a shell (at the prompt, where they
are disabled by default), only in a programming language. # and
## are part of extendedglob, so they are not enabled by default
(extendedglob has all the operators that can cause some
portability problems: ~, ^, #, (#...)).

There are not that many non-alnum characters in ASCII, all are
overloaded plenty of times in zsh syntax.

Also, at the prompt, you don't read what you type, it's
write-only. For scripts, zsh has longer forms that you might
consider more readable (like kshglob). Note that zsh also
supports perl regexps (with a builtin in a loadable module).
Post by l***@gmail.com
Nope. The way I see it, '**' is a useful shorthand for
recursive globbing that should be used mostly for simpler
matches; find is more suitable for complicated expressions.
One could say that '*' matches any sequence of characters
without a '/', while '**' matches any sequence of characters.
Pretty intuitive to me.
[...]

The problem with find is that its output can't be postprocessed
reliably. zsh's globbing qualifiers definitely make find
awkward and deprecated (though I agree find has its useful
uses too).

**/*.c(-.mh-2OL[1,10])

selects the ten largest C files modified within the last two
hours, which you can't do with find.
--
Stephane
l***@gmail.com
2006-03-09 13:17:32 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
([a-z]##/)[a-z]##
(can also be written **/*~*[^a-z/]*)
All files with only letters in their path.
find . -type f |grep "/[a-zA-Z]*\$"
Which I find more readable than the '##' syntax. Besides, I don't like
the overloading of '#', that character should be reserved for comments,
IMO.
Well, I think it is a better approach than ksh's. ksh also
extended its globbing/patterns so that it has the same
functionality. As "*" was already used by the patterns, there
needed to be something else to implement the RE's "*".
*(...)
(i.e. "*" becomes a prefix while it is a postfix in REs).
That means that (...) can't be used for grouping. That's why ksh
I guess that is worse.
Post by Stephane Chazelas
In zsh, RE's * is #, RE's + is ##, RE's ? is (...|) (or
nothing). That makes zsh patterns involve fewer keystrokes.
That also means the grouping can be implemented as (...).
Note that zsh implemented globbing attributes (like case
insensitivity, approximate matching, activation of backreferences)
the same way as perl does: (#...) in zsh, (?...) in perl,
because # can't otherwise follow a ( in zsh the same way as ?
can't otherwise follow a ( in perl.
Isn't this just a new syntax for regular expressions? Why not simply
use grep to provide you with real regexps? The unix philosophy of using
a set of orthogonal commands that do one thing each, and connecting
them using pipes, is very powerful. Globbing is a _very_ useful hack
that breaks this philosophy by allowing you to perform some types of
very _simple_ matching inside the shell. The above seems to make
globbing into this huge, hairy beast that can be used for almost
anything find, grep, head, tail, ls and sort do, with a syntax that is
as readable as Perl. What is the point of stuffing all this into
the shell when it can be implemented in a more readable and even more
powerful way using external commands?
Post by Stephane Chazelas
comments are not useful in a shell (at the prompt, where they
are disabled by default), only in a programming language. # and
## are part of extendedglob, so they are not enabled by default
(extendedglob has all the operators that can cause some
portability problems: ~, ^, #, (#...)).
There are not that many non-alnum characters in ASCII, all are
overloaded plenty of times in zsh syntax.
Fair enough.
Post by Stephane Chazelas
Also, at the prompt, you don't read what you type, it's
write-only. For scripts, zsh has longer forms that you might
consider more readable (like kshglob). Note that zsh also
supports perl regexp (with a builtin in a loadable module).
Why the fixation with avoiding external commands?
Post by Stephane Chazelas
Post by l***@gmail.com
Nope. The way I see it, '**' is a useful shorthand for
recursive globbing that should be used mostly for simpler
matches; find is more suitable for complicated expressions.
One could say that '*' matches any sequence of characters
without a '/', while '**' matches any sequence of characters.
Pretty intuitive to me.
[...]
The problem with find is that its output can't be postprocessed
reliably. zsh's globbing qualifiers definitely make find
awkward and deprecated (though I agree find has its useful
uses too).
**/*.c(-.mh-2OL[1,10])
selects the ten largest C files modified within the last two
hours, which you can't do with find.
ls -S (find . -mmin -120)|head
(or ls -S `find . -mmin -120`|head if you haven't switched to fish yet)

You use a pipeline containing find, ls and head. That's the beauty of
pipelines; use one tool to solve each part of the task, and string them
together in a pipeline to solve a nearly infinite number of problems
with a small set of simple tools.

Also notice that the find command is comparable in length to the zsh
version, especially if you use 'echo' to output it to stdout.
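A concrete run of this pipeline on a scratch directory, written for a POSIX shell. The names are kept free of whitespace since the unquoted substitution word-splits (a point raised later in the thread), and -S/-mmin are GNU/BSD extensions, not POSIX.

```shell
# The pipeline from the post, on a scratch directory.  Names are kept
# free of whitespace since the unquoted substitution word-splits.
# -S and -mmin are GNU/BSD extensions, not POSIX.
scratch=$(mktemp -d)
cd "$scratch"
printf 'xxxxx' > big.c
printf 'x'     > small.c
ls -Sd $(find . -type f -mmin -120) | head   # largest first: ./big.c
```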
Post by Stephane Chazelas
--
Stephane
--
Axel
Stephane Chazelas
2006-03-09 14:54:37 UTC
Permalink
On 9 Mar 2006 05:17:32 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
In zsh, RE's * is #, RE's + is ##, RE's ? is (...|) (or
nothing). That makes zsh patterns to involve fewer keystrokes.
That also means the grouping can be implemented as (...).
Note that zsh implemented globbing attributes (like case
insensitive, approximate matching, activation of backreference
the same way as perl does: (#...) in zsh, (?...) in perl,
because # can't otherwise follow a ( in zsh the same way as ?
can't otherwise follow a ( in perl.
Isn't this just a new syntax for regular expressions?
It is. It is also one that is compatible (mostly) with historical
globbing and is more suitable for matching filenames than the
standard REs.
Post by l***@gmail.com
Why not simply
use grep to provide you with real regexps?
grep doesn't do filename expansion, and basic REs are not handy
to match filenames.
Post by l***@gmail.com
The unix philosophy of using
a set of orthogonal commands that do one thing each, and connecting
them using pipes, is very powerful. Globbing is a _very_ useful hack
that breaks this philosophy by allowing you to perform some types of
very _simple_ matching inside the shell. The above seems to make
globbing into this huge, hairy beast that can be used for almost
anything find, grep, head, tail, ls and sort do, with a syntax that is
as readable as Perl. What is the point of stuffing all this into
the shell when it can be implemented in a more readable and even more
powerful way using external commands?
Maybe because with a few keystrokes, you can do something more reliably
than you could have done with a 20-line unreadable script.

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Also, at the prompt, you don't read what you type, it's
write-only. For scripts, zsh has longer forms that you might
consider more readable (like kshglob). Note that zsh also
supports perl regexp (with a builtin in a loadable module).
Why the fixation with avoiding external commands?
True, in that case it doesn't make much sense. The pcre module
is sold as an "example module", though. It may come in handy if you
don't have perl.

[...]
Post by l***@gmail.com
ls -S (find . -mmin -120)|head
(or ls -S `find . -mmin -120`|head if you haven't switched to fish yet)
ls -Sd, and it doesn't select regular files (or symlinks to
regular files), and it makes the assumption that file paths don't have
newline or blank characters. And -S and -mmin are not Unix.
Post by l***@gmail.com
You use a pipeline containing find, ls and head. That's the beauty of
pipelines
But it's broken by design. Either Unix should prohibit all types
of non-alnum chars in filenames, or command substitution and all
the standard filtering utilities should be changed to be able to
cope with a NUL character separator (zsh is the only shell that
supports the NUL character (maybe fish does as well?)).
Post by l***@gmail.com
use one tool to solve each part of the task, and string them
together in a pipeline to solve a nearly infinite number of problems
with a small set of simple tools.
Yes, in theory it's nice. zsh gives you something more powerful
that doesn't use that framework. You're free to use grep and the
like with zsh. I made my choice ;).
Post by l***@gmail.com
Also notice that the find command is comparable in length to the zsh
version, especially if you use 'echo' to output it to stdout.
[...]

On a GNU system, the strict equivalent of zsh's would be:

eval "set -- $(find . -name '*.c' \( -type f -o -type l \) -exec \
find {} -follow -prune -type f -mmin -120 -printf '%s:%p\0' \; |
sort -rnz | awk -v'RS=\0' '
function escape(s) {
gsub(/'\''/, "'\''\\\'\'\''", s)
return "'\''" s "'\''"
}
NR <= 10 {
sub(/[^:]*:/, "")
printf " %s", escape($0)
}')"

and there's no Unix equivalent.

Is that more readable or short or orthogonal?

It will probably not work with old versions of gawk.
--
Stephane
l***@gmail.com
2006-03-09 16:02:48 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
In zsh, RE's * is #, RE's + is ##, RE's ? is (...|) (or
nothing). That makes zsh patterns to involve fewer keystrokes.
That also means the grouping can be implemented as (...).
Note that zsh implemented globbing attributes (like case
insensitive, approximate matching, activation of backreference
the same way as perl does: (#...) in zsh, (?...) in perl,
because # can't otherwise follow a ( in zsh the same way as ?
can't otherwise follow a ( in perl.
Isn't this just a new syntax for regular expressions?
It is. It is also one that is compatible (mostly) with historical
globbing and is more suitable for matching filenames than the
standard REs.
How is it more suitable for filename matching?
Post by Stephane Chazelas
Post by l***@gmail.com
Why not simply
use grep to provide you with real regexps?
grep doesn't do filename expansion, and basic REs are not handy
to match filenames.
Grep uses find (or the shell, or some other program) to do the filename
matching. As to regex unsuitability for filename matching, I'd like to
see a justification of this.
Post by Stephane Chazelas
Post by l***@gmail.com
The unix philosophy of using
a set of orthogonal commands that do one thing each, and connecting
them using pipes, is very powerful. Globbing is a _very_ useful hack
that breaks this philosophy by allowing you to perform some types of
very _simple_ matching inside the shell. The above seems to make
globbing into this huge, hairy beast that can be used for almost
anything find, grep, head, tail, ls and sort do, with a syntax that is
as readable as Perl. What is the point of stuffing all this into
the shell when it can be implemented in a more readable and even more
powerful way using external commands?
Maybe because with a few keystrokes, you can do something more reliably
than you could have done with a 20-line unreadable script.
How about writing a script that is only a few keystrokes long then?
(See below)
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Also, at the prompt, you don't read what you type, it's
write-only. For scripts, zsh has longer forms that you might
consider more readable (like kshglob). Note that zsh also
supports perl regexp (with a builtin in a loadable module).
Why the fixation with avoiding external commands?
True, in that case it doesn't make much sense. The pcre module
is sold as an "example module", though. It may come in handy if you
don't have perl.
[...]
Post by l***@gmail.com
ls -S (find . -mmin -120)|head
(or ls -S `find . -mmin -120`|head if you haven't switched to fish yet)
ls -Sd, and it doesn't select regular files (or symlinks to
regular files), and it makes the assumption that file paths don't have
newline or blank characters. And -S and -mmin are not Unix.
Nor is zsh Unix. To get what you want you still have to install
non-standard commands. It is easy (and common) enough to install the
GNU toolchain on e.g. Solaris. You can do it in ~ if you have the
space. You can do a 'make dist' to various common architectures and
store the tarballs on a public ftp to make for a quick install on
almost any machine with internet access.

I do not see how using a nonstandard shell is more portable than using
nonstandard shell commands.

As to -S and -mmin, they are not standard, but they are not uncommon
either. E.g. FreeBSD implements both.

Handling blanks was an oversight on my part; the fish code handles this
correctly, the Posix version needs IFS fiddling or some other solution.
Another nice way of solving this would be for ls to accept the list of
files to print from stdin, the same way most other unix commands do.
This should obviously only be used after detecting that stdin is not a
tty. Newlines are a bit tougher; in my opinion the correct solution
would be to ban the use of newlines in filenames.
Post by Stephane Chazelas
Post by l***@gmail.com
You use a pipeline containing find, ls and head. That's the beauty of
pipelines
But it's broken by design. Either Unix should prohibit all types
of non-alnum chars in filenames, or command substituion and all
the standard filtering utilities should be changed to be able to
cope with a NUL character separator (zsh is the only shell that
supports the NUL character (maybe fish does as well?)).
That's a bit strongly worded, isn't it? Why is the idea of a pipeline
broken by design just because most commands don't handle nulls in filenames?
The null problem is simply an unfortunate side effect of the choice of
string format in C, it is not a fundamental property of a pipe. And
unless I'm mistaken, the null character is illegal in filenames anyway,
since all filenames on Unix filesystems are specified to the operating
system using null-terminated strings.
Null handling seems to be a spot where zsh outshines fish though, fish
does not yet support nulls in strings. There is a pretty clear plan for
the changes needed for fish to do so, though.
Post by Stephane Chazelas
Post by l***@gmail.com
use one tool to solve each part of the task, and string them
together in a pipeline to solve a nearly infinite number of problems
with a small set of simple tools.
Yes, in theory it's nice. zsh gives you something more powerful
that doesn't use that framework. You're free to use grep and the
like with zsh. I made my choice ;).
Implementing what I would qualify as a hack (no offence meant) in the
shell globbing is a quick and easy way to solve the problem, but in my
opinion it makes the language special-purposed and clumsy, it makes the
architecture unwieldy and it makes the codebase unmaintainable. Long
term, I think it's the wrong thing to do.

Let's fix bugs and add the missing features to the free versions of
common commands, so that anybody can get a working system anywhere
using any shell.
Post by Stephane Chazelas
Post by l***@gmail.com
Also notice that the find command is comparable in length to the zsh
version, especially if you use 'echo' to output it to stdout.
[...]
eval "set -- $(find . -name '*.c' \( -type f -o -type l \) -exec \
find {} -follow -prune -type f -mmin -120 -printf '%s:%p\0' \; |
sort -rnz | awk -v'RS=\0' '
function escape(s) {
gsub(/'\''/, "'\''\\\'\'\''", s)
return "'\''" s "'\''"
}
NR <= 10 {
sub(/[^:]*:/, "")
printf " %s", escape($0)
}')"
and there's no Unix equivalent.
Well zsh is not Unix either, so I don't see that as relevant here.
Post by Stephane Chazelas
Is that more readable or short or orthogonal?
No, but I'm sure there are equally complicated and unwieldy
zsh-specific ways of writing the same thing.

Instead of using rather complicated code like the script you
provided, you can get any behaviour you want WRT symlinks, directories,
fifos, etc. by using the -type switch for find and the script from my
original mail. Fish correctly handles spaces, and nulls are illegal in
filenames, so these are no problem. The only outstanding issue is the
problem with newlines, where I would simply suggest implementing a
strict no-newlines-in-filenames policy.
Post by Stephane Chazelas
It will probably not work with old versions of gawk.
--
Stephane
--
Axel
Stephane Chazelas
2006-03-09 17:46:04 UTC
Permalink
On 9 Mar 2006 08:02:48 -0800, ***@gmail.com wrote:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
It is. It is also one that is compatible (mostly) with historical
globbing and is more suitable for matching filenames than the
standard REs.
How is it more suitable for filename matching?
compare:

rm *.html

with

rm ^.*\.html$

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
grep doesn't do filename expansion, and basic REs are not handy
to match filenames.
Grep uses find (or the shell, or some other program) to do the filename
matching. As to regex unsuitability for filename matching, I'd like to
see a justification of this.
But for file paths (which can be any sequence of any byte but
NUL), you can't have them interact together. The standard text
utilities process lines, not file paths.

Post by l***@gmail.com
Post by Stephane Chazelas
Maybe because with a few keystrokes, you can do something more reliably
than you could have done with a 20-line unreadable script.
How about writing a script that is only a few keystrokes long then?
Then you need the tools to do so. Unix doesn't have them.
[...]
Post by l***@gmail.com
I do not see how using a nonstandard shell is more portable than using
nonstandard shell commands.
I didn't say so. Your solution didn't work on Unix. Mine worked
on any Unix, as long as you use zsh, as I specified :-b
Post by l***@gmail.com
Handling blanks was an oversight on my part, the fish code handles this
correctly, the Posix version needs IFS fidling or some other solution.
Did fish fix the POSIX (and also zsh) bug/misfeature where
all trailing newline characters are removed by command
substitution instead of only the last one?
Post by l***@gmail.com
Another nice way of solving this would be for ls to accept the list of
files to print from stdin, the same way most other unix commands do.
GNU (or BSD) xargs is there for that once you have a NUL
separated list of filenames.

... | xargs -r0 ls

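A sketch of that NUL-separated handoff: with GNU/BSD find and xargs, filenames containing blanks survive the trip through the pipe intact. The scratch directory and names are made up for the demo.

```shell
# NUL-separated handoff from find to xargs: the embedded space does
# not split the filename.  -print0 and -0 are GNU/BSD extensions;
# -r (skip the command on empty input) is GNU.
scratch=$(mktemp -d)
touch "$scratch/plain" "$scratch/with space"
find "$scratch" -type f -print0 | xargs -0 ls -ld
```

Both files are listed, one line each; with newline-separated output and plain xargs, "with space" would have been split into two bogus arguments.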
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
But it's broken by design. Either Unix should prohibit all types
of non-alnum chars in filenames, or command substitution and all
the standard filtering utilities should be changed to be able to
cope with a NUL character separator (zsh is the only shell that
supports the NUL character (maybe fish does as well?)).
That's a bit strongly worded, isn't it? Why is the idea of a pipeline
broken by design just because most commands don't handle nulls in filenames?
The null problem is simply an unfortunate side effect of the choice of
string format in C, it is not a fundamental property of a pipe. And
unless I'm mistaken, the null character is illegal in filenames anyway,
since all filenames on Unix filesystems are specified to the operating
system using null-terminated strings.
Null handling seems to be a spot where zsh outshines fish though, fish
does not yet support nulls in strings. There is a pretty clear plan for
the changes needed for fish to do so, though.
That's not what I meant. Since LF (line feed) is allowed in a
filename, you can't use tools that deal with LF-separated records.
You need tools that can deal with NUL-separated records (or use
some escaping technique, or use /// as the separator...) as NUL
is the only byte that is not allowed in a filename.

Post by l***@gmail.com
Post by Stephane Chazelas
Yes, in theory it's nice. zsh gives you something more powerful
that doesn't use that framework. You're free to use grep and the
like with zsh. I made my choice ;).
Implementing what I would qualify as a hack (no offence meant) in the
shell globbing is a quick and easy way to solve the problem, but in my
opinion it makes the language special-purposed and clumsy, it makes the
architecture unwieldy and it makes the codebase unmaintainable. Long
term, I think it's the wrong thing to do.
What's wrong with zsh regular expressions? What's wrong with the
globbing qualifiers?
Post by l***@gmail.com
Let's fix bugs and add the missing features to the free versions of
common commands, so that anybody can get a working system anywhere
using any shell.
You'd need to completely rethink the existing toolset.

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
eval "set -- $(find . -name '*.c' \( -type f -o -type l \) -exec \
find {} -follow -prune -type f -mmin -120 -printf '%s:%p\0' \; |
sort -rnz | awk -v'RS=\0' '
function escape(s) {
gsub(/'\''/, "'\''\\\'\'\''", s)
return "'\''" s "'\''"
}
NR <= 10 {
sub(/[^:]*:/, "")
printf " %s", escape($0)
}')"
and there's no Unix equivalent.
Well zsh is not Unix either, so I don't see that as relevant here.
You said zsh's approach was bad and that it was better to have
tools interact together to do that. I showed that such tools
don't exist, and that the closest you can come is by using
non-standard features, and the result is far from being as
readable (and readability was your point, not mine; I was saying
that zsh globbing is a useful feature to have at the prompt).
Post by l***@gmail.com
Post by Stephane Chazelas
Is that more readable or short or orthogonal?
No, but I'm sure there are equally complicated and unwieldy
zsh-specific ways of writing the same thing.
But can you think of any simpler way to write it without zsh? I
could with perl or python, not with shell tools (well, AT&T's
tw (tree walker) might work, but it's very bogus).
Post by l***@gmail.com
Instead of using rather complicated code like the script you
provided, you can get any behaviour you want WRT symlinks, directories,
fifos, etc. by using the -type switch for find and the script from my
original mail.
No, I don't know of any find implementation that is able to tell
you what is the end type of a symlink without using -L (formerly
-follow), and -follow causes you to descend into symlinks to
directories, which you generally don't want to do.

find . -type l -print

reports symlinks to regular files but also to non-regular files.

You need:

find . -exec test -f {} \; -print

if you want regular files or symlinks to regular files.

Similarly, -printf '%s\n' (GNU-specific) reports the size of the
symlink, not the file it points to (unless you use -L (formerly
-follow)).
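The `-exec test -f` trick can be seen on a scratch tree. A hedged sketch (names are made up): `test -f` resolves the symlink, so symlinks to regular files are kept while the symlink to a directory is dropped, and nothing is descended into.

```shell
# test -f resolves symlinks, so this prints regular files and
# symlinks to regular files, but not the symlink to a directory.
scratch=$(mktemp -d)
cd "$scratch"
mkdir realdir
touch realfile
ln -s realfile link_to_file
ln -s realdir  link_to_dir
find . -exec test -f {} \; -print
```

On this tree it prints ./realfile and ./link_to_file (in some order), but neither ./realdir nor ./link_to_dir.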
--
Stephane
l***@gmail.com
2006-03-10 00:41:32 UTC
Permalink
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
It is. It is also one that is compatible (mostly) with historical
globbing and is more suitable for matching filenames than the
standard REs.
How is it more suitable for filename matching?
rm *.html
with
rm ^.*\.html$
Well, \.html$ is also equivalent and much shorter to boot. But what I
meant was that while globbing has its place, in what way are the weird
#-based regexps zsh has better than regular regexps?
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
grep doesn't do filename expansion, and basic REs are not handy
to match filenames.
Grep uses find (or the shell, or some other program) to do the filename
matching. As to regex unsuitability for filename matching, I'd like to
see a justification of this.
But for file paths (which can be any sequence of any byte but
NUL), you can't have them interact together. The standard text
utilities process lines, not file paths.
Which is why newlines should be avoided in filenames. Not a big issue.
Notice how file managers usually interpret pressing return when
renaming a file as 'done', i.e. you can't use a normal file manager to
name a file with a newline. I've seen more than a few that wouldn't
correctly display them either. Face it, filenames with newlines should
simply be considered a bug.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Maybe because with a few keystrokes, you can do something more reliably
than you could have done with a 20-line unreadable script.
How about writing a script that is only a few keystrokes long then?
Then you need the tools to do so. Unix doesn't have them.
No, but Unix barely exists anyway. BSD and the GNU system do.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
I do not see how using a nonstandard shell is more portable than using
nonstandard shell commands.
I didn't say so. Your solution didn't work on Unix. Mine worked
on any Unix, as long as you use zsh, as I specified :-b
My solution works on any Unix so long as you have a modern commandline
toolchain installed, either GNU or BSD will work, both are open source
and freely downloadable over the net.
Post by Stephane Chazelas
Post by l***@gmail.com
Handling blanks was an oversight on my part; the fish code handles this
correctly, the Posix version needs IFS fiddling or some other solution.
Did fish fix the POSIX (and also zsh) bug/misfeature where
all trailing newline characters are removed by command
substitution instead of only the last one?
Yes.
Post by Stephane Chazelas
Post by l***@gmail.com
Another nice way of solving this would be for ls to accept the list of
files to print from stdin, the same way most other unix commands do.
GNU (or BSD) xargs is there for that once you have a NUL
separated list of filenames.
True, but it's a hack IMO.
Post by Stephane Chazelas
... | xargs -r0 ls
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
But it's broken by design. Either Unix should prohibit all types
of non-alnum chars in filenames, or command substitution and all
the standard filtering utilities should be changed to be able to
cope with a NUL character separator (zsh is the only shell that
supports the NUL character (maybe fish does as well?)).
That's a bit strongly worded, isn't it? Why is the idea of a pipeline
broken by design just because most commands don't handle nulls in filenames?
The null problem is simply an unfortunate side effect of the choice of
string format in C, it is not a fundamental property of a pipe. And
unless I'm mistaken, the null character is illegal in filenames anyway,
since all filenames on Unix filesystems are specified to the operating
system using null-terminated strings.
Null handling seems to be a spot where zsh outshines fish though, fish
does not yet support nulls in strings. There is a pretty clear plan for
the changes needed for fish to do so, though.
That's not what I meant. Since LF (line feed) is allowed in a
filename, you can't use tools that deal with LF-separated records.
You need tools that can deal with NUL-separated records (or use
some escaping technique, or use /// as the separator...) as NUL
is the only byte that is not allowed in a filename.
Oh, ok. That is interesting. One could add NUL support on a command by
command basis, and use an environment variable to tell programs to use
NUL by default. That way, this could be done in a backward/forward
compatible fashion without any changes to scripts.
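As it happens, several GNU tools already do this on a per-command basis via flags rather than an environment variable. A hedged sketch using GNU-specific flags (find -print0, sort -z, xargs -0):

```shell
# Per-command NUL support that already exists in GNU tools:
# find -print0, sort -z, xargs -0.  A filename containing a
# newline passes through the pipeline as a single record.
scratch=$(mktemp -d)
touch "$scratch/b" "$scratch/a
with_newline"
find "$scratch" -type f -print0 |
  sort -z |
  xargs -0 sh -c 'echo "$# files seen"' sh
```

This prints "2 files seen": the newline-containing name is still one argument, where an LF-separated pipeline would have split it in two.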
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Yes, in theory it's nice. zsh gives you something more powerful
that doesn't use that framework. You're free to use grep and the
like with zsh. I made my choice ;).
Implementing what I would qualify as a hack (no offence meant) in the
shell globbing is a quick and easy way to solve the problem, but in my
opinion it makes the language special-purposed and clumsy, it makes the
architecture unwieldy and it makes the codebase unmaintainable. Long
term, I think it's the wrong thing to do.
What's wrong with zsh regular expressions? What's wrong with the
globbing qualifiers?
* They implement partial regular expressions but with a different
syntax, and it doesn't even seem to be a better syntax. So you have two
languages to do the same thing.
* It's much harder to replace one syntax with another in a few years
since it's hardcoded into the shell.
* They make the shell a monolith instead of a modular set of tools.
* Bug severity escalation. A crash bug in the regexp code not only
breaks the program, it crashes the whole shell.
* It's harder to find help on syntax magic than on actual commands.
Post by Stephane Chazelas
Post by l***@gmail.com
Let's fix bugs and add the missing features to the free versions of
common commands, so that anybody can get a working system anywhere
using any shell.
You'd need to completely rethink the existing toolset.
Why? Using nulls does not require such a thing, but even without doing
that, you can simply avoid newlines in filenames. I see very little
evidence that the entire Unix toolset is fundamentally broken. There
are issues, but I see no proof that these issues can't be solved with
the current tools, either by disallowing newlines in filenames or by
making commands use a null byte as a record separator. Both should be
possible to do in a transitional manner over time.
Post by Stephane Chazelas
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
eval "set -- $(find . -name '*.c' \( -type f -o -type l \) -exec \
find {} -follow -prune -type f -mmin -120 -printf '%s:%p\0' \; |
sort -rnz | awk -v'RS=\0' '
function escape(s) {
gsub(/'\''/, "'\''\\\'\'\''", s)
return "'\''" s "'\''"
}
NR <= 10 {
sub(/[^:]*:/, "")
printf " %s", escape($0)
}')"
and there's no Unix equivalent.
Well zsh is not Unix either, so I don't see that as relevant here.
You said zsh's approach was bad and that it was better to have
tools interact together to do that. I showed that such tools
don't exist, and that the closest you can come is by using
non-standard features, and the result is far from being as
readable (and readability was your point, not mine; I was saying
that zsh globbing is a useful feature to have at the prompt).
Heh. I'm advocating fish + GNU or BSD coreutils, you're advocating zsh.
Neither one is standard Unix, so we should just let that type of
argument rest, since it clearly doesn't belong here. I did not mean to
give the impression that I was advocating using e.g. only Posix or only
BSD4.2 tools; I thought it was obvious, since I'm advocating the use of a
non-posix shell.
Post by Stephane Chazelas
Post by l***@gmail.com
Post by Stephane Chazelas
Is that more readable or short or orthogonal?
No, but I'm sure there are equally complicated and unwieldy
zsh-specific ways of writing the same thing.
But can you think of any simpler way to write it without zsh. I
could with perl or python, not with shell tools (well, AT&T's
tw (tree walker) might work but it's very bogus).
I think the issues you have are both minor and fixable. The Gnu
commands are not set in stone and forever unchanging. A few weeks ago I
submitted a patch to the sed maintainers to add a -E switch to sed for
switching on extended regexps, since -E is the switch used by both gnu
grep and BSD sed, and unless I misunderstood the sed maintainers, my
patch was accepted.

If there are problems, let's fix them in the right way at the right
place.
Post by Stephane Chazelas
Post by l***@gmail.com
Instead of using the rather complicated code like the script you
provided, you can get any behaviour you want WRT symlinks, directories,
fifos, etc. by using the -type switch for find and the script from my
original mail.
No, I don't know of any find implementation that is able to tell
you what is the end type of a symlink without using -L (formerly
-follow), and -follow causes you to descend into symlinks to
directories, which you generally don't want to do.
find . -type l -print
reports symlinks to regular files but also to non-regular files.
I don't get it, you'll have to run this by me again. find's default
behavior is to list symlinks to files but not to directories, at least
that is what GNU find does on my machine. To neither list nor follow
symlinks, use -type f. You can then instruct ls in the next step of the
pipeline whether to stat the real file or the one pointed to by any
symlinks output by find. Where is the problem?
Post by Stephane Chazelas
find . -exec test -f {} \; -print
If you want regular files or symlink to regular files.
Same -printf '%s\n' (GNU specific) reports the size of the
symlink, not the file it points to (unless you use -L (formerly
-follow).
--
Stephane
--
Axel
Stephane CHAZELAS
2006-03-10 09:01:49 UTC
Permalink
2006-03-9, 16:41(-08), ***@gmail.com:
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
rm *.html
with
rm ^.*\.html$
Well, \.html$ is also equivalent and much shorter to boot. But what I
meant was that while globbing has its place
good point.
Post by l***@gmail.com
in what way are the weird
#-based regexps zsh has better than regular regexps.
Because you can do:

rm ./*.html

as before, and if you want to do anything more complicated (like
removing filenames with only digits), you can do it.

rm [a-z]##-<12-20>.jpg

for instance.

I do that very often, and I'm glad zsh has that feature so that
I don't have to remove the files individually (no, I'm not going
to try and run ls | awk -F'[-.]' '/^[a-z]+-[0-9]+\.jpg$/ &&
$2 >= 12 && $2 <= 20 {system("rm " $0)}')
See also the usage of zsh globbing combined with zmv (see
group:comp.unix.shell zmv on groups.google for examples).

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
How about writing a script that is only a few keystrokes long then?
Then you need the tools to do so. Unix doesn't have them.
No, but Unix barely exists anyway. BSD and the GNU system do.
You mean in home computing maybe? (which is fine, I kind of
understand that fish is targeting home users).

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
I didn't say so. Your solution didn't work on Unix. Mine worked
on any Unix, as long as you use zsh, as I specified :-b
My solution works on any Unix so long as you have a modern commandline
toolchain installed, either GNU or BSD will work, both are open source
and freely downloadable over the net.
Other Unices may not have a GNU or BSD tool chain, but it should
be noted that those utilities are not the most important feature
of an OS. Many commercial Unices are better at this or that than
BSD or Linux, which makes them more attractive for this or that
purpose (an obvious reason is when some software has only been
ported to that Unix). Installing a GNU toolchain is not always an
option.

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Did fish fix the POSIX (and also zsh) bug/misfeature where
all trailing newline characters were removed by command
substitution instead of only the last one?
Yes.
Good. I think it may be the first one to fix that. Well, rc
either did word splitting or didn't strip any.

[...]
Post by l***@gmail.com
* They implement partial regular expressions but with a different
syntax, and it doesn't even seem to be a better syntax. So you have two
languages to do one same thing.
It's not partial. It's missing one important operator wrt the
standard REs: the {n,m}.

REs and globbing are incompatible, zsh had the choice of either
drop the globbing altogether and use REs instead or extend the
globbing. The $, (, { RE operators are also special in
shells. So switching to REs would have meant breaking even more
the Bourne syntax.

All Bourne like shells have gone with extending their globbing
to add RE features. ksh93 even has a way to convert from its
globbing to REs (its REs). bash3 has introduced a [[ ... ]] RE
matching operator but still uses wildcards for globbing.
Post by l***@gmail.com
* It's much harder to replace one syntax with another in a few years
since it's hardcoded into the shell.
I don't follow you. What do you mean.
Post by l***@gmail.com
* They make the shell a monolith instead of a modular set of tools.
* Bug severity escalation. A crash bug in the regexp code not only
breaks the program, it crashes the whole shell
* It's harder to find help on syntax magic than on actual commands
info zsh
iFil<Tab><Tab>

Filename Generation

Everything is there.

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
You'd need to completely rethink the existing toolset.
Why? Using nulls does not require such a thing, but even without doing
that, you can simply avoid newlines in filenames.
If they're there, I can't. If the system doesn't prohibit them,
I can't help them being there. I can choose not to use them, so
I can be safe when processing my own files (though sometimes an
error with copy-pasting may cause me to create such files). But
I can't write scripts that process foreign files and make the
same assumption. I've seen many tmp cleaning scripts (even
system ones like on HPUX) that enabled users to have root delete
any file on the system (like /etc/passwd).


[...]
Post by l***@gmail.com
Post by Stephane Chazelas
You said zsh's approach was bad and that it was better to have
tools interact together to do that. I showed that such tools
don't exist, and that the closest you can come to is with using
non-standard features and the result is far from being as
readable (and readability was your point, not mine, I was saying
that zsh globbing is useful feature to have at the prompt).
Heh. I'm advocating fish + GNU or bsd coreutils, you're advocating zsh.
The first solution I provided was zsh, the longer one was
fish+GNU (I used a sh syntax because I don't know the fish
syntax, but I'm pretty sure you would have to do something very
similar with fish). The solution you provided didn't answer the
question and had flaws.

So, now, please show me a fish+GNU solution to that very simple
problem: count the number of lines in the 10 largest (by size) C
(with .c extension) regular files (for a symlink, the target of
the link should be considered) in the current directory and its
sub-directories (omitting dotfiles, which I forgot to take care
of in my fish+GNU solution), modified within the last 2 hours,
without making any assumption on the format of the filenames
(except that they don't start with "." and end in ".c").

which would demonstrate that zsh was wrong to allow one to do
things like:
wc -l -- **/*.c(-.mh-2OL[1,10])

The only standard tool to walk a directory tree has very limited
functionalities. That's probably why AT&T came up with that "tw"
of their own (you may be interested in having a look at their
alternative toolchest by the way) with an awk like syntax.
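For comparison, here is a hedged sh/GNU sketch of that problem (assuming modern GNU findutils and coreutils with their null-separator options, and sidestepping the symlink subtlety that the rest of this subthread is about, so it is not a complete answer to the challenge):

```shell
# Ten largest .c regular files modified in the last two hours, then
# their line counts; dotfiles and dot-directories are pruned, and NUL
# separators keep arbitrary filenames safe.  head -z and cut -z are
# relatively recent GNU extensions.
find . -name '.?*' -prune -o -name '*.c' -type f -mmin -120 \
    -printf '%s\t%p\0' |
  sort -rnz |
  head -z -n 10 |
  cut -z -f 2- |
  xargs -0 wc -l --
```

Clearly longer than the zsh glob, which rather supports the point that such qualifiers are convenient at the prompt.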
Post by l***@gmail.com
Neither one is standard Unix, so we should just let that type of
argument rest, since it clearly doesn't belong here. I did not mean to
give the impression that I was advocating using e.g. only Posix or only
BSD4.2 tools, I thought it was obvious since I'm advocating the use of a
non-posix shell.
That was not my point.

[...]
Post by l***@gmail.com
Post by Stephane Chazelas
find . -type l -print
reports symlinks to regular files but also to non-regular files.
I don't get it, you'll have to run this by me again. find's default
behavior is to list symlinks to files but not to directories, at least
that is what GNU find does on my machine. To neither list nor follow
symlinks, use -type f. You can then instruct ls in the next step of the
pipeline whether to stat the real file or the one pointed to by any
symlinks output by find. Where is the problem?
[...]

$ ls -lF
total 12
-rw-r--r-- 1 chazelas chazelas 21 Mar 10 08:51 a.c
drwxr-xr-x 2 chazelas chazelas 4096 Mar 10 08:51 b.c/
prw-r--r-- 1 chazelas chazelas 0 Mar 10 08:51 c.c|
lrwxrwxrwx 1 chazelas chazelas 8 Mar 10 08:52 d.c -> d.c.temp
-rw-r--r-- 1 chazelas chazelas 4 Mar 10 08:52 d.c.temp
-rw-r--r-- 1 chazelas chazelas 2 Mar 10 08:52 e.c
lrwxrwxrwx 1 chazelas chazelas 8 Mar 10 08:52 f.c -> ..

I want the two largest C files (d.c (4 bytes) and a.c (21 bytes)).

I only want regular files (their target for symlinks), how should I do?

find . -type f

will omit d.c

find . -follow -type f

will not, but then, it will recurse into ".."

So, I need to do:

find . -exec test -f {} \;

But then, if I want the size, I can use -printf "%s\n" (%s for
size), but then again without -follow, I get 8 instead of 4 for
d.c.

That's why I used:

find . \( -type f -o -type l \) -exec \
find {} -prune -follow -type f -printf '%s:%f\0' \;
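That nested-find trick can be verified directly. A hedged demo (GNU find assumed; \n is used instead of \0 here purely so the output is printable):

```shell
# A symlink to a regular file is resolved to its target's size,
# while a directory named *.c never reaches the inner find.
cd "$(mktemp -d)"
printf '12345\n' > real.c     # 6 bytes
ln -s real.c link.c           # symlink to a regular file
mkdir dir.c                   # neither -type f nor -type l: excluded
find . \( -type f -o -type l \) -name '*.c' -exec \
  find {} -follow -prune -type f -printf '%s:%f\n' \; | sort
```

Both link.c and real.c should come out with size 6, the size of the symlink's target, and dir.c is filtered out by the outer find.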
--
Stéphane
l***@gmail.com
2006-03-10 19:50:03 UTC
Permalink
[...]
Post by Stephane CHAZELAS
Post by l***@gmail.com
in what way are the weird
#-based regexps zsh has better than regular regexps.
rm ./*.html
as before, and if you want to do anything more complicated (like
removing filenames with only digits), you can do it.
rm [a-z]##-<12-20>.jpg
for instance.
So use globbing for the easy things like *.html, and use real regexps
for the complicated things, i.e.

rm (ls | grep '^[0-9]\{1,\}\.jpg$')

Trying to extend wildcards to support the full power of regexps feels
like making an octopus by nailing extra legs to a dachshund. (Hi C++
lovers)
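The division of labour argued for above can be sketched in plain sh (hedged: the pattern is a guess at "filenames with only digits" from the earlier exchange):

```shell
# Use grep, a real regexp engine, to select all-digit .jpg names,
# instead of extending the shell's glob syntax.
printf '%s\n' 12.jpg abc.jpg 007.jpg photo-1.jpg |
  grep '^[0-9]\{1,\}\.jpg$'
```

Only 12.jpg and 007.jpg survive the filter.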
Post by Stephane CHAZELAS
I do that very often, and I'm glad zsh has that feature so that
I don't have to remove the files individually (no, I'm not going
to try and run ls | awk -F'[-.]' '/^[a-z]+-[0-9]+\.jpg$/ &&
$2 >= 12 && $2 <= 20 {system("rm " $0)}')
See also the usage of zsh globbing combined with zmv (see
group:comp.unix.shell zmv on groups.google for examples).
I'll try to look into it this weekend.
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Post by l***@gmail.com
How about writing a script that is only a few keystrokes long then?
Then you need the tools to do so. Unix doesn't have them.
No, but Unix barely exists anyway. BSD and the GNU system do.
You mean in home computing maybe? (which is fine, I kind of
understand that fish is targeting home users).
No, I meant that AT&T Unix no longer exists, and SCO isn't really a
dominant actor either. Solaris, AIX and friends all extend standard
Unix in incompatible ways, much like BSD does.

Posix is not universally implemented either. If I remember correctly,
the Solaris toolchain is not Posix compatible, but they ship a separate
toolchain that you use by manually changing your PATH, for
example.
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
I didn't say so. Your solution didn't work on Unix. Mine worked
on any Unix, as long as you use zsh, as I specified :-b
My solution works on any Unix so long as you have a modern commandline
toolchain installed, either GNU or BSD will work, both are open source
and freely downloadable over the net.
Other Unices may not have a GNU or BSD tool chain, but it should
be noted that those utilities are not the most important feature
of an OS. Many commercial Unices are better at this or that than
BSD or Linux, which makes them more attractive for this or that
purpose (an obvious reason is when some software has only been
ported to that Unix). Installing a GNU toolchain is not always an
option.
See above. I didn't mean that everybody should use BSD, only that if
they want to use a sane set of tools for useful scripting, they are
free (in more senses than one) to do so.
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
Did fish fix the POSIX (and also zsh) bug/misfeature where
all trailing newline characters were removed by command
substitution instead of only the last one?
Yes.
Good. I think it may be the first one to fix that. Well, rc
either did word splitting or didn't strip any.
There is one issue though. Currently the word splitting character is
hardcoded to '\n'. I don't use IFS since it usually contains space and
tab, which is rarely what you want. This way, one can do things like

for i in (cat /etc/passwd);
...
end

which is nice. But I'm sure there are situations where this is
undesirable. The obvious solution would be to use a separate variable,
e.g. $CSFS (command substitution field separators), that by default
only contained \n. But that's silly since long term it would be nice to
be able to support NULL as a separator character, and nulls can't be
used in environment variables, at least not when exporting them. I
haven't given this much thought, so I don't know what the proper
solution is yet.
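The newline-only splitting described here can be imitated in plain sh for comparison (a hedged illustration; fish needs no IFS juggling for this):

```shell
# With IFS set to a lone newline, an entry containing a space
# ("my user") survives word splitting in one piece.
IFS='
'
for entry in $(printf 'root:x:0\nmy user:x:1000\n'); do
  printf '<%s>\n' "$entry"
done
```

This prints <root:x:0> and <my user:x:1000> rather than splitting on the space.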
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
* They implement partial regular expressions but with a different
syntax, and it doesn't even seem to be a better syntax. So you have two
languages to do one same thing.
It's not partial. It's missing one important operator wrt the
standard REs: the {n,m}.
REs and globbing are incompatible, zsh had the choice of either
drop the globbing altogether and use REs instead or extend the
globbing. The $, (, { RE operators are also special in
shells. So switching to REs would have meant breaking even more
the Bourne syntax.
All Bourne like shells have gone with extending their globbing
to add RE features. ksh93 even has a way to convert from its
globbing to REs (its REs). bash3 has introduced a [[ ... ]] RE
matching operator but still uses wildcards for globbing.
I would propose that regexps, extended regexps and future replacements
for regexps all be provided through external commands instead. In the
places where it is currently cumbersome to use external commands, the
syntax for those commands simply needs to be fixed.
Post by Stephane CHAZELAS
Post by l***@gmail.com
* It's much harder to replace one syntax with another in a few years
since it's hardcoded into the shell.
I don't follow you. What do you mean.
We can't get rid of wildcards, since they are a core part of the shell.
But if something better than regexps comes along, we can simply include
more commands into the standard toolchest for using this new
all-powerful matching language, and we won't have any problems with
regexps using up the few operators we have, since regexps live in a
different command. If I'm not mistaken, sed, grep and other
regexp-based tools were not a part of the earliest Unix versions.
Post by Stephane CHAZELAS
Post by l***@gmail.com
* They make the shell a monolith instead of a modular set of tools.
* Bug severity escalation. A crash bug in the regexp code not only
breaks the program, it crashes the whole shell
* It's harder to find help on syntax magic than on actual commands
info zsh
iFil<Tab><Tab>
Filename Generation
Everything is there.
Yep, but I would guess that only a few percent of all shell users are
well versed enough in using info to find the right section of the
manual with any sort of ease, whereas most shell users know how to use
man.

Even worse, if you want to understand a piece of shellscript that looks
like this:

echo #¤%~*\/&%((¤&/%(bs)&%[hsdr]#¤<%&j>65¤&K34))2K3

where do you begin? What should you search for?

On the other hand, a piece of shellscript that looks like this:

fnurb -h bleh|groink -g -q|snorp --foo|gleb -s glubb blipp|boing

is perfectly undecipherable in itself, but you know exactly where to
look when finding out what each part does.
Post by Stephane CHAZELAS
[...]
Post by l***@gmail.com
Post by Stephane Chazelas
You'd need to completely rethink the existing toolset.
Why? Using nulls does not require such a thing, but even without doing
that, you can simply avoid newlines in filenames.
If they're there, I can't. If the system doesn't prohibit them,
I can't help them being there. I can choose not to use them, so
I can be safe when processing my own files (though sometimes an
error with copy-pasting may cause me to create such files). But
I can't write scripts that process foreign files and make the
same assumption. I've seen many tmp cleaning scripts (even
system ones like on HPUX) that enabled users to have root delete
any file on the system (like /etc/passwd).
True enough, there are situations where you can't make that kind of
assumption. Obviously the Unix toolset should be extended to handle
such situations correctly. This does not mean those tools are
fundamentally broken, however. Name one reason why the current Unix
tools could not be modified in a sane, backwards compatible way that
would use nulls as field separators.

[...]
Post by Stephane CHAZELAS
Post by l***@gmail.com
Heh. I'm advocating fish + GNU or bsd coreutils, you're advocating zsh.
The first solution I provided was zsh, the longer one was
fish+GNU (I used a sh syntax because I don't know the fish
syntax, but I'm pretty sure you would have to do something very
similar with fish). The solution you provided didn't answer the
question and had flaws.
So, now, please show me a fish+GNU solution to that very simple
problem: count the number of lines in the 10 largest (by size) C
(with .c extension) regular files (for a symlink, the target of
the link should be considered) in the current directory and its
sub-directories (omitting dotfiles, which I forgot to take care
of in my fish+GNU solution), modified within the last 2 hours,
without making any assumption on the format of the filenames
(except that they don't start with "." and end in ".c").
which would demonstrate that zsh was wrong to allow one to do
wc -l -- **/*.c(-.mh-2OL[1,10])
I give one solution to the symlink problem below and outline another.
The other requirements are simple to fulfill using head, find and
friends.
Post by Stephane CHAZELAS
The only standard tool to walk a directory tree has very limited
functionalities. That's probably why AT&T came up with that "tw"
of their own (you may be interested in having a look at their
alternative toolchest by the way) with an awk like syntax.
Is it open source? Where can I find documentation?

[...]
Post by Stephane CHAZELAS
Post by l***@gmail.com
Post by Stephane Chazelas
find . -type l -print
reports symlinks to regular files but also to non-regular files.
I don't get it, you'll have to run this by me again. find's default
behavior is to list symlinks to files but not to directories, at least
that is what GNU find does on my machine. To neither list nor follow
symlinks, use -type f. You can then instruct ls in the next step of the
pipeline whether to stat the real file or the one pointed to by any
symlinks output by find. Where is the problem?
[...]
$ ls -lF
total 12
-rw-r--r-- 1 chazelas chazelas 21 Mar 10 08:51 a.c
drwxr-xr-x 2 chazelas chazelas 4096 Mar 10 08:51 b.c/
prw-r--r-- 1 chazelas chazelas 0 Mar 10 08:51 c.c|
lrwxrwxrwx 1 chazelas chazelas 8 Mar 10 08:52 d.c -> d.c.temp
-rw-r--r-- 1 chazelas chazelas 4 Mar 10 08:52 d.c.temp
-rw-r--r-- 1 chazelas chazelas 2 Mar 10 08:52 e.c
lrwxrwxrwx 1 chazelas chazelas 8 Mar 10 08:52 f.c -> ..
I want the two largest C files (d.c (4 bytes) and a.c (21 bytes)).
I only want regular files (their target for symlinks), how should I do?
find . -type f
will omit d.c
find . -follow -type f
will not, but then, it will recurse into ".."
find . -exec test -f {} \;
But then, if I want the size, I can use -printf "%s\n" (%s for
size), but then again without -follow, I get 8 instead of 4 for
d.c.
find . \( -type f -o -type l \) -exec \
find {} -prune -follow -type f -printf '%s:%f\0' \;
Ok. That gets a little bit complicated in fish, since there is no
trivial way to pipe things through 'test', which is what you want here.
So I made a tiny general purpose wrapper that allows you to use simple
tests like that in a pipeline:

function tf
    set -l IFS \n
    while read -l fn
        if test $argv $fn
            echo $fn
        end
    end
end

tf is short for 'test filter'; its usage should be pretty obvious.

Using this function, we can simply use

ls -S (find . -name "*.c"|tf -f)|head -n 2

which does what you want. Ideally, ls would support reading input files
through stdin, in which case we could write

find . -name "*.c"|tf -f|ls -S|head -n 2

which is even clearer and easier to read.

The tf function is useful for many general purpose tests, e.g. you can
use it to filter files on properties, strings on length, integers on
size, etc.. One could probably do an ugly one-time hack using eval as
well.
Post by Stephane CHAZELAS
--
Stéphane
--
Axel
William Park
2006-03-10 00:53:57 UTC
Permalink
Post by l***@gmail.com
Since the dawn of time, clueless users have asked questions like 'how
can I change an environment variable in another running process?' The
answer has always been variations of 'You don't.' No longer so in fish.
Fish supports universal variables, which are variables whose value is
shared between all running fish instances with the specified user on
the specified machine. Universal variables are automatically saved
between reboots and shutdowns, so you don't need to put their values in
an init file. Universal variables have the outermost scope, meaning
they will never be used in preference to shell-specific variables,
which should minimize security implications.
Universal variables make it much more practical to use environment
variables for configuration options. You simply change an environment
variable in one shell, and the change will propagate to all running
shells, and it will be saved so that the new value is used after a
reboot as well. One example of environment variables in action can be
had by launching two fish instances in separate terminals side-by-side.
Then issue the command 'set fish_color_cwd blue' and the color of the
current working directory element of the prompt will change to
blue in both shells. Using universal variables makes it much more
convenient to set configuration options like $BROWSER, $PAGER and
$CDPATH.
What if you want to isolate one instance from the rest of Fish
instances? Can you specify the normal behaviour of shell?
--
William Park <***@yahoo.ca>, Toronto, Canada
ThinFlash: Linux thin-client on USB key (flash) drive
http://home.eol.ca/~parkw/thinflash.html
BashDiff: Super Bash shell
http://freshmeat.net/projects/bashdiff/
l***@gmail.com
2006-03-10 01:06:16 UTC
Permalink
Post by William Park
Post by l***@gmail.com
Since the dawn of time, clueless users have asked questions like 'how
can I change an environment variable in another running process?' The
answer has always been variations of 'You don't.' No longer so in fish.
Fish supports universal variables, which are variables whose value is
shared between all running fish instances with the specified user on
the specified machine. Universal variables are automatically saved
between reboots and shutdowns, so you don't need to put their values in
an init file. Universal variables have the outermost scope, meaning
they will never be used in preference to shell-specific variables,
which should minimize security implications.
Universal variables make it much more practical to use environment
variables for configuration options. You simply change an environment
variable in one shell, and the change will propagate to all running
shells, and it will be saved so that the new value is used after a
reboot as well. One example of environment variables in action can be
had by launching two fish instances in separate terminals side-by-side.
Then issue the command 'set fish_color_cwd blue' and the color of the
current working directory element of the prompt will change to
blue in both shells. Using universal variables makes it much more
convenient to set configuration options like $BROWSER, $PAGER and
$CDPATH.
What if you want to isolate one instance from the rest of Fish
instances? Can you specify the normal behaviour of shell?
Yes. When setting a variable (using the 'set' builtin command) you can
always manually specify a scope, so if you set the scope to global (-g)
when defining the value of a variable that already exists in the
universal scope, then that new value will be assigned to a global
variable with the same name and the universal variable is never
touched, and since global variables override universal variables, that
one shell will have a different value than all others.

If you do not specify a scope, then the scope of the innermost existing
variable is assumed. If no variable currently exists, then
function-local scope is assumed in a function, or global scope, if not
in a function.
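In fish syntax the rule reads roughly as follows (a hedged sketch of interactive configuration commands, reusing the fish_color_cwd variable from the example earlier in the thread):

```fish
set -U fish_color_cwd blue  # universal: shared with all fish instances and persisted
set -g fish_color_cwd red   # global: shadows the universal value in this shell only
echo $fish_color_cwd        # this shell sees 'red'; other shells still see 'blue'
```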
Post by William Park
--
ThinFlash: Linux thin-client on USB key (flash) drive
http://home.eol.ca/~parkw/thinflash.html
BashDiff: Super Bash shell
http://freshmeat.net/projects/bashdiff/
--
Axel