Discussion:
Different variable assignments
Frank Winkler
2024-10-11 18:11:57 UTC
Hi there !

Consider the following commands:


$ var1=`uname -sr`
$ echo $var1
Darwin 24.0.0
$ read var2 <<< `uname -sr`
$ echo $var2
Darwin 24.0.0
$ uname -sr | read var3
$ echo $var3

$ uname -sr | read -p var3
$ echo $var3

$

While the first two behave as expected, I wonder why the latter
two fail. What's the difference behind the scenes?

And even more confusing, why does this familiar one work anyway?

$ sw_vers | while read line; do echo $line; done
ProductName: macOS
ProductVersion: 15.0.1
BuildVersion: 24A348
$

I've been using commands like that one for a very long time and that's
why I tried the simple "read" above - with no success.

How can I do such an assignment at the end of a command instead of the
beginning? Any ideas?

TIA

fw
John-Paul Stewart
2024-10-11 18:27:40 UTC
Post by Frank Winkler
Hi there !
$ var1=`uname -sr`
$ echo $var1
Darwin 24.0.0
$ read var2 <<< `uname -sr`
$ echo $var2
Darwin 24.0.0
$ uname -sr | read var3
$ echo $var3
$ uname -sr | read -p var3
$ echo $var3
$
While the first two behave as expected, I wonder why the latter
two fail. What's the difference behind the scenes?
I don't know about other shells, but in Bash each command in a pipeline
is run in a subshell. (See the "Pipelines" section of the Bash man
page.) Thus you're doing the 'read var3' part in a different shell than
where 'echo $var3' runs. That's why it is empty when you echo it.
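A minimal demonstration of that subshell effect (assuming bash with default settings, i.e. lastpipe off):

```shell
# In bash, `read var3` runs in a subshell; its assignment
# disappears when that subshell exits at the end of the pipeline.
var3=""
echo "Darwin 24.0.0" | read var3
echo "outer shell sees: '$var3'"    # prints: outer shell sees: ''
```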
Post by Frank Winkler
And even more confusing, why does this familiar one work anyway?
$ sw_vers | while read line; do echo $line; done
ProductName: macOS
ProductVersion: 15.0.1
BuildVersion: 24A348
Here the subshell runs everything between 'while' and 'done' so the read
and echo commands are in the same (sub)shell this time.
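The flip side is that any variable set inside that subshell still vanishes once the loop ends; a small sketch (bash, default settings):

```shell
count=0
printf 'a\nb\nc\n' | while read line; do
  count=$((count + 1))        # read and the increment share one subshell
  echo "line $count: $line"
done
echo "after the loop: count=$count"   # still 0: the subshell's copy is gone
```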
Frank Winkler
2024-10-11 18:45:07 UTC
Post by John-Paul Stewart
I don't know about other shells, but in Bash each command in a pipeline
is run in a subshell. (See the "Pipelines" section of the Bash man
page.) Thus you're doing the 'read var3' part in a different shell than
where 'echo $var3' runs. That's why it is empty when you echo it.
That sounds very plausible - thanks for enlightening me! :)
So this is not a "read" issue but rather a matter of shell instance and
hence there's no way to do the assignment at the end?

Regards

fw
Janis Papanagnou
2024-10-12 01:59:49 UTC
Post by Frank Winkler
Post by John-Paul Stewart
I don't know about other shells, but in Bash each command in a pipeline
is run in a subshell. (See the "Pipelines" section of the Bash man
page.) Thus you're doing the 'read var3' part in a different shell than
where 'echo $var3' runs. That's why it is empty when you echo it.
That sounds very plausible - thanks for enlightening me! :)
So this is not a "read" issue but rather a matter of shell instance and
hence there's no way to do the assignment at the end?
It depends on your shell. If you choose Kornshell you won't have that
issue and can write it as you did, with the output you'd expect.

$ uname -a | read var
$ echo "$var"
Linux [...snip...]

The reason is that the last command in a pipeline will (in Kornshell)
be executed in the "current" shell context.

(In other shells you have to work around the issue as demonstrated in
other answers to your post. Some workarounds are clumsier than others.
A shorter variant of the here-document posted elsethread is a
here-string

$ read var <<< $(uname -a)

another method is using process substitution and redirection

$ read var < <(uname -a)

Both are supported by shells like ksh, bash, and zsh, but they are
non-standard, as are some other workaround proposals that use bash
specifics like 'coproc', which doesn't work as widely as '<<<' or
'<(...)' do.)
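Both workarounds also let read split the output into several variables in one go; a small sketch for bash/ksh93/zsh:

```shell
# IFS word-splitting assigns one field per variable,
# with any remainder going to the last variable:
read os rel <<< "$(uname -sr)"
echo "os=$os rel=$rel"
```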

Janis
Lawrence D'Oliveiro
2024-10-12 02:26:44 UTC
... use bash-specifics like 'coproc' ...
It isn’t bash-specific.
Kenny McCormack
2024-10-19 10:45:43 UTC
... use bash-specifics like 'coproc' ...
It isn't bash-specific.
People on these newsgroups often use phrases like "bash specific" as
synonyms for "Not strictly POSIX" (*), even though the bash feature under
discussion is also found in other shells. In fact, many bash-isms,
including "coproc", came originally from ksh. I'm sure Janis knows this.
--
The key difference between faith and science is that in science, evidence that
doesn't fit the theory tends to weaken the theory (that is, make it less likely to
be believed), whereas in faith, contrary evidence just makes faith stronger (on
the assumption that Satan is testing you - trying to make you abandon your faith).
Janis Papanagnou
2024-10-19 12:25:14 UTC
Post by Kenny McCormack
... use bash-specifics like 'coproc' ...
It isn't bash-specific.
Maybe; I haven't checked all existing shells. I know that the keyword
is not used in Kornshell. I know it's used in bash. I don't know, e.g.,
about zsh, the other major shell I'm also interested in.
Post by Kenny McCormack
People on these newsgroups often use phrases like "bash specific" as
synonyms for "Not strictly POSIX" (*), even though the bash feature under
discussion is also found in other shells. In fact, many bash-isms,
including "coproc", came originally from ksh. I'm sure Janis knows this.
Please note that while ksh supports co-processes it doesn't use (to my
knowledge) the keyword 'coproc'. - Kornshell's co-processes are invoked
by appending the '|&' token to a command, and reads and writes are done
with 'read -p' and 'print -p', respectively.
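For reference, a minimal co-process sketch along those lines (ksh syntax, not bash; it also assumes the bc calculator is available):

```shell
bc |&              # start bc as the co-process
print -p '2 + 3'   # write to the co-process's stdin
read -p answer     # '-p' here means "read from the co-process"
print "$answer"    # 5
```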

Janis
Kenny McCormack
2024-10-19 13:39:14 UTC
In article <vf08fi$3sf5e$***@dont-email.me>,
Janis Papanagnou <janis_papanagnou+***@hotmail.com> wrote:
...
Post by Janis Papanagnou
Please note that while ksh supports co-processes it doesn't use (to my
knowledge) the keyword 'coproc'. - Kornshell's co-processes are invoked
by appending the '|&' token to a command, and reads and writes are done
with 'read -p' and 'print -p', respectively.
Seems to be pretty much the same thing, but with a slightly different
notation (|& vs. "coproc"). I think the original bash designers wanted to
be at least sort of "csh compatible", so they took |& from csh to mean
"merge stdout and stderr", so had to come up with something else for
coprocs.

Anyway, all I know about ksh is basically what I've read from various bash
sources.
--
"We are in the beginning of a mass extinction, and all you can talk
about is money and fairy tales of eternal economic growth."

- Greta Thunberg -
Janis Papanagnou
2024-10-19 13:57:01 UTC
Post by Kenny McCormack
...
Post by Janis Papanagnou
Please note that while ksh supports co-processes it doesn't use (to my
knowledge) the keyword 'coproc'. - Kornshell's co-processes are invoked
by appending the '|&' token to a command, and reads and writes are done
with 'read -p' and 'print -p', respectively.
Seems to be pretty much the same thing, but with a slightly different
notation (|& vs. "coproc"). I think the original bash designers wanted to
be at least sort of "csh compatible", so they took |& from csh to mean
"merge stdout and stderr", so had to come up with something else for
coprocs.
Ah, I see. (Although "csh compatible" in scripting is, erm, somewhat
disturbing.)

Janis
Post by Kenny McCormack
[...]
Frank Winkler
2024-10-12 07:42:22 UTC
Post by Janis Papanagnou
It depends on your shell. If you choose Kornshell you won't have that
issue and can write it as you did, with the output you'd expect.
$ uname -a | read var
$ echo "$var"
Linux [...snip...]
The reason is that the last command in a pipeline will (in Kornshell)
be executed in the "current" shell context.
Interesting hint! I wasn't aware that there are such differences between
the shells. And indeed, some simple tests seem to work in an interactive
ksh.

Let's see the whole story. For historical reasons, I'm actually using
ksh for almost all scripts instead of bash for interactive use - not
knowing about the fact above.

There, I run command A, which produces output and calls a sub-command
B that also produces output. This works fine.
What I want to achieve is to grab some parts of the output and store
them in a variable, but without changing the output on the screen.

So I tried something like

tty=`tty`
A | tee $tty | ... | read var

"tee `tty`" inside the command fails, so I do it outside. The output of
A is still there but B's is gone (because B doesn't know anything about
the "tee"?), and the whole thing no longer seems to work. $var is
empty, even though this is a ksh script and the part after "tee" works
on its own.
To my understanding, the default B can be changed with an option, but
when I set it to "B | tee $tty", there's still no output.

AFAIR, "var=`...`" works better but as the primary job is the command
itself and the variable is just a spin-off product, I'd prefer to do the
assignment at the end. I believe it looks better then ;) ...

Probably it would also be feasible with some temp files but I try to
avoid them wherever possible.

Happy week-end!

fw
Lawrence D'Oliveiro
2024-10-12 07:51:47 UTC
AFAIR, "var=`...`" works ...
POSIX also allows

$(«cmd»)

as a nicer alternative to

`«cmd»`
Janis Papanagnou
2024-10-12 11:08:48 UTC
Post by Frank Winkler
Post by Janis Papanagnou
It depends on your shell. If you choose Kornshell you won't have that
issue and can write it as you did, with the output you'd expect.
$ uname -a | read var
$ echo "$var"
Linux [...snip...]
The reason is that the last command in a pipeline will (in Kornshell)
be executed in the "current" shell context.
Interesting hint! I wasn't aware that there are such differences between
the shells.
Yes, it's a subtle but important difference; many shell users have
wondered about it, and you need to know how shells use subprocesses
in pipelines to understand why it fails in other shells but works
in ksh.
Post by Frank Winkler
And indeed, some simple tests seem to work in an interactive
ksh.
You can rely on it with any official ksh (since ksh88) and branches of
them. (But don't count on it if you're using some inferior ksh clone.)
Post by Frank Winkler
Let's see the whole story. For historical reasons, I'm actually using
ksh for almost all scripts instead of bash for interactive use - not
knowing about the fact above.
From your description below I'm not sure you still have a question or
whether you're just describing what you intended to do and are fine
with ksh's pipeline behavior. (Specifically I'm unsure about the 'B'
and what you mean by "sub-command B" in your example(s).)

If something is still not working for you, please clarify.
Post by Frank Winkler
There, I run command A which is producing output and which is calling
sub-command B, also producing output. This works fine.
What I want to achieve is to grab some parts of the output and store it
in a variable but without changing the output on the screen.
So I tried something like
tty=`tty`
A | tee $tty | ... | read var
$ man tty
tty - print the file name of the terminal connected to standard input

The 'tty' command in your 'tee' pipeline command has no tty attached;
it reads from the pipe. That's why A | tee $(tty) | ... | read var
doesn't work as you expected. You have to grab the tty information
outside the pipeline (as you've experienced).
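The effect is easy to reproduce directly (standard tty behavior; the exact message may vary slightly by platform):

```shell
tty              # at a terminal: prints e.g. /dev/pts/0
echo hi | tty    # stdin is now the pipe: prints "not a tty" and exits non-zero
```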

Janis
Post by Frank Winkler
"tee `tty`" inside the command fails, so I do it outside. The output of
A is still there but B's is gone (because B doesn't know anything about
the "tee"?) and the whole thing doesn't seem to be still working. $var
is empty, though this is a ksh script and the stuff behind "tee" is also
working.
To my understanding, the default B can be changed with an option but
when I set it to "B | tee $tty", there's still no output.
AFAIR, "var=`...`" works better but as the primary job is the command
itself and the variable is just a spin-off product, I'd prefer to do the
assignment at the end. I believe it looks better then ;) ...
Probably it would also be feasible with some temp files but I try to
avoid them wherever possible.
Happy week-end!
fw
Lem Novantotto
2024-10-12 12:01:55 UTC
(Specifically I'm unsure about the 'B' and what you mean by "sub-command
B" in your example(s).)
Waiting for his clarification, I'm thinking that he sees that something
like:

$ input=$(tty) && echo 123456789 | tee $input | grep -o 456 | \
tee $input | read myvar && echo "myvar is $myvar"; input= ; myvar=
123456789
456
myvar is

doesn't work, while of course:

$ input=$(tty) && read myvar < <( echo 123456789 | tee $input | \
grep -o 456 | tee $input ) && echo "myvar is $myvar"; input= ; myvar=
123456789
456
myvar is 456

does work (BTW: here I'm in Bash).
--
Bye, Lem
Frank Winkler
2024-10-12 15:57:16 UTC
Post by Lem Novantotto
Waiting for his clarification, I'm thinking that he sees that something
$ input=$(tty) && echo 123456789 | tee $input | grep -o 456 | \
tee $input | read myvar && echo "myvar is $myvar"; input= ; myvar=
123456789
456
myvar is
doesn't work
I think you're right. But why doesn't it work in ksh?
Post by Lem Novantotto
$ input=$(tty) && read myvar < <( echo 123456789 | tee $input | \
grep -o 456 | tee $input ) && echo "myvar is $myvar"; input= ; myvar=
123456789
456
myvar is 456
does work (BTW: here I'm in Bash).
I'm still thinking about the difference between "< <(...)" and "<<< `...`"

And I would be happy with this one if there was a notation "the other
way round" ;) ... something like "... >>> $var"

Regards

fw
Lem Novantotto
2024-10-12 17:09:20 UTC
Post by Frank Winkler
I think you're right. But why doesn't it work in ksh?
Uhm... It should work. Here it works, at least.

And sometimes something like that *may* work in bash, too, provided
we set:
$ shopt -s lastpipe

But my command doesn't work instead, in bash. That's why:

| lastpipe
| If set, and job control is not active, the shell runs the last command
| of a pipeline not executed in the background in the current shell
| environment.
--
Bye, Lem
Kenny McCormack
2024-10-19 11:47:16 UTC
In article <veeag0$786f$***@dont-email.me>,
Lem Novantotto <***@none.invalid> wrote:
...
Post by Lem Novantotto
| lastpipe
| If set, and job control is not active, the shell runs the last command
| of a pipeline not executed in the background in the current shell
| environment.
The key phrase here is "job control is not active". AFAICT, "lastpipe" (bash)
works in a script, but not interactively.
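A quick way to check that claim (assuming bash >= 4.2, where lastpipe was introduced):

```shell
#!/bin/bash
# Non-interactive scripts run without job control, so lastpipe applies:
shopt -s lastpipe
echo "Darwin 24.0.0" | read var3    # read now runs in the current shell
echo "var3 is: $var3"
```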
--
You know politics has really been turned upside down when you have someone in the
government with a last name of Cheney (Liz, Senator from Wyoming) who is the voice of
reason.
Lem Novantotto
2024-10-19 12:53:42 UTC
Post by Kenny McCormack
AFAICT, "lastpipe" (bash)
works in a script, but not interactively.
Uhm... I'm sorry, probably I do not get your point. It's a bash
shell option: you set it (or unset it), and then you go.

| $ set -m ; shopt -u lastpipe ; unset my
| $ echo ciao |read my ; echo $my
|
| $ set +m ; shopt -s lastpipe ; unset my
| $ echo ciao |read my ; echo $my
| ciao
--
Bye, Lem
Lawrence D'Oliveiro
2024-10-12 21:32:33 UTC
Post by Frank Winkler
I'm still thinking about the difference between "< <(...)" and "<<< `...`"
Not sure about the extra “<”, but “<(«cmd»)” gets substituted with the
name of a file (i.e. not stdin) that the process can open and read to get
the output of «cmd». Similarly “>(«cmd»)” gets substituted with the name
of a file (i.e. not stdout) that the process can open and write to feed
input to waiting «cmd».

“<<” and “<<<”, on the other hand, are for specifying alternative sources
of inline data for stdin, or you can use “«fd»<<” and “«fd»<<<” to specify
an alternative «fd» that the process will expect to find already open for
reading, to obtain that data.
Janis Papanagnou
2024-10-12 21:47:34 UTC
Post by Lawrence D'Oliveiro
Post by Frank Winkler
I'm still thinking about the difference between "< <(...)" and "<<< `...`"
Not sure about the extra “<”,
'<(...)' executes the command indicated by '...' and provides a file
descriptor, something like '/dev/fd/5', which (being effectively a
filename) can be redirected to the 'read' command.

The shell's 'read' command doesn't read from files but from stdin,
so if 'read's input is in a file or is the output of a command (as in
this case) you have to do a redirection; for the first case
read < file
and in case of a process substitution (a file named like /dev/fd/5)
read < <(some command)
which sort of "expands" to something like
read < /dev/fd/5

So it's not an "extra" '<'; it's just a necessary redirection for a
command that reads from stdin but not from files.
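Putting the three forms side by side (bash; the plain pipe is the one that loses the value):

```shell
read a <<< "$(uname -sr)"   # command substitution fed in as a here-string
read b < <(uname -sr)       # process substitution plus an input redirection
uname -sr | read c          # read runs in a subshell: c stays empty out here
echo "a=$a"
echo "b=$b"
echo "c='$c'"
```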

Janis
Post by Lawrence D'Oliveiro
but “<(«cmd»)” gets substituted with the
name of a file (i.e. not stdin) that the process can open and read to get
the output of «cmd». Similarly “>(«cmd»)” gets substituted with the name
of a file (i.e. not stdout) that the process can open and write to feed
input to waiting «cmd».
[...]
Lawrence D'Oliveiro
2024-10-12 21:50:37 UTC
Post by Janis Papanagnou
'<(...)' executes the command indicated by '...' and provides a file
descriptor, something like '/dev/fd/5', which (being effectively a
filename) can be redirected to the 'read' command.
It’s an actual file name. The process treats it as just another filename
argument. The fact that it encodes a file descriptor is something OS-
specific, that only code that wants to create such names (like the shell)
has to worry about.
Post by Janis Papanagnou
The shell's 'read' command doesn't read from files but from stdin
It can read from any file currently open for reading.
Post by Janis Papanagnou
So it's no "extra" '<' ...
Ah, I see. Instead of “< <(«cmd»)”, you could have just written
“0<(«cmd»)”.
Janis Papanagnou
2024-10-12 21:56:33 UTC
Post by Lawrence D'Oliveiro
Post by Janis Papanagnou
'<(...)' executes the command indicated by '...' and provides a file
descriptor, something like '/dev/fd/5', which (being effectively a
filename) can be redirected to the 'read' command.
It’s an actual file name. The process treats it as just another filename
argument.
Exactly. (That's what I was saying.) '/dev/fd/5' acts like a file/path
name (but only on systems that support the /dev/fd mechanism).
Post by Lawrence D'Oliveiro
The fact that it encodes a file descriptor is something OS-
specific, that only code that wants to create such names (like the shell)
has to worry about.
Post by Janis Papanagnou
The shell's 'read' command doesn't read from files but from stdin
It can read from any file currently open for reading.
Post by Janis Papanagnou
So it's no "extra" '<' ...
Ah, I see. Instead of “< <(«cmd›)”, you could have just written
“0<(«cmd»)”.
No.

(1388)$ read <(cat hello_world)
ksh: read: /dev/fd/3: invalid variable name
(1389)$ read < <(cat hello_world)
(1390)$ echo $REPLY
Hello world!
(1391)$ read 0<(cat hello_world)
ksh: read: /dev/fd/3: invalid variable name

Janis
Lem Novantotto
2024-10-12 23:07:41 UTC
I'm still thinking about the difference between "< <(...)" and "<<<`...`"
As already pointed out by Lawrence:

1) "command2 <<<`command1`": `command2`, equal to the preferable[*]
$(command2), can be seen as the output of command2: a string. So with
<<< you are telling the shell to take this string as input on the
standard input of command2. So: execute command1, then take its output
as the input of command2. This is called *command* substitution:
substitution of a command with its output.

[*] No characters are special between parentheses: easier.

2) "command2 < <(process_of_command1)": here process, which is the
"active running instance" of command1 - enough: remove the difference,
think of command1 and stop - is run with its output connected to a
named FIFO pipe file (or to some file in /dev/fd).
So <(process_of_command1) is expanded as the name of this file.
The first < is simple input redirection.
So: execute command1, connect the output of its process to a special
file, redirect this special file to input of command2.
This is called *process* output substitution: substitute a
process with the name of its output file.
Process substitution does work only on systems supporting named pipes
or /dev/fd/... so it's less universal than command substitution.

Some practical differences. For example, try:

$ cat <<<$(while true; do echo yes; done)

Nothing: cat is waiting for the other stuff to end. But it won't end,
in this case! Gawsh!

$ cat < <( while true; do echo yes; done)

Rock'n'roll!
And I would be happy with this one if there was a notation "the other
way round" ... something like "... >>> $var"
You're more the straightforward type of guy, eh! That's fine! ;)
But, sorry, '>>>' is still to come. In bash, at least. :-)
--
Bye, Lem
Lem Novantotto
2024-10-12 23:10:06 UTC
Post by Lem Novantotto
1) "command2 <<<`command1`": `command2`, equal to the preferable[*]
$(command2),
Sorry:

1) "command2 <<<`command1`": `command1`, equal to the preferable[*]
$(command1)
--
Bye, Lem
Lem Novantotto
2024-10-12 23:39:23 UTC
And by Janis, of course!
I apologize.
--
Bye, Lem
Frank Winkler
2024-10-15 11:46:42 UTC
Post by Frank Winkler
I think you're right. But why doesn't it work in ksh?
BTW: in an interactive ksh, the example does work:

$ input=$(tty) && echo 123456789 | tee $input | grep -o 456 | tee $input
| read myvar && echo "myvar is $myvar"; input= ; myvar=
123456789
456
myvar is 456
$

And it also does when I put this into a ksh script.

The thing in question does

tty=`tty`
sudo openconnect -b ... |\
tee $tty | grep "^Session authentication will expire at" |\
cut -d ' ' -f 7- | read end

and this completely fails. Terminal output is missing, $end is empty and
the whole command doesn't seem to work. Without the last two lines, it's
working perfectly.
"sudo" also doesn't seem to be the problem as simple tests are working.

Regards

fw
Frank Winkler
2024-10-24 09:30:34 UTC
Post by Frank Winkler
The thing in question does
tty=`tty`
sudo openconnect -b ... |\
  tee $tty | grep "^Session authentication will expire at" |\
  cut -d ' ' -f 7- | read end
and this completely fails. Terminal output is missing, $end is empty and
the whole command doesn't seem to work. Without the last two lines, it's
working perfectly.
"sudo" also doesn't seem to be the problem as simple tests are working.
After some more testing with just a "tee" into a file, it looks like
the pipe starts OC and writes its output (including the part I want to
"grep") into the file, but then hangs. Everything following this line
in the script doesn't seem to run, anyway. Maybe some effect of the
background option?

Regards

fw
Kenny McCormack
2024-10-24 11:21:27 UTC
Post by Frank Winkler
The thing in question does
tty=`tty`
sudo openconnect -b ... |\
tee $tty | grep "^Session authentication will expire at" |\
cut -d ' ' -f 7- | read end
and this completely fails. Terminal output is missing, $end is empty and
the whole command doesn't seem to work. Without the last two lines, it's
working perfectly.
"sudo" also doesn't seem to be the problem as simple tests are working.
OK, let's try to normalize this. But first, please tell us which shell
you are targeting. I.e., how much backward compatibility do you need?

I'm going to assume bash, but there isn't much difference. In particular,
note that $() is POSIX, so you really don't need to ever mess with ``.

Also note: You do not need \ at the end of the line if the line ends with |
(also true for lines that end with || or && - and possibly others)

Anyway, this should do it:

tty=$(tty)
end="$(sudo openconnect -b ... |
tee $tty | grep "^Session authentication will expire at" |
cut -d ' ' -f 7-)"

Note also that the grep and the cut could be merged into a single
invocation of awk - but I don't know enough about what you're doing to be
more specific. You can probably also eliminate all of the "tty" stuff by
writing either to /dev/tty (which is generic Unix) or /dev/stderr (which
might be Linux-specific). Also, if you use gawk (GNU awk), then you can
write to /dev/stderr from within AWK and eliminate all of the "tee" stuff
as well.
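For illustration, here's roughly how the grep+cut pair could collapse into one awk call. The sample line is a stand-in I'm assuming from the thread, since openconnect's real output isn't shown here:

```shell
# Print fields 7 onward of the matching line (same as grep | cut -d' ' -f7-):
sample='Session authentication will expire at Thu Oct 24 21:00:00 2024'
end=$(printf '%s\n' "other noise" "$sample" |
      awk '/^Session authentication will expire at/ {
               for (i = 7; i <= NF; i++)
                   printf "%s%s", $i, (i < NF ? OFS : ORS)
           }')
echo "$end"    # Oct 24 21:00:00 2024
```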
--
"Remember when teachers, public employees, Planned Parenthood, NPR and PBS
crashed the stock market, wiped out half of our 401Ks, took trillions in
TARP money, spilled oil in the Gulf of Mexico, gave themselves billions in
bonuses, and paid no taxes? Yeah, me neither."
Frank Winkler
2024-10-24 21:43:59 UTC
Post by Kenny McCormack
I'm going to assume bash, but there isn't much difference. In
particular,
Post by Kenny McCormack
note that $() is POSIX, so you really don't need to ever mess with ``.
I know that "$()" is POSIX, but I don't see "``" as a mess; in fact,
I like it much better. And we're talking about ksh.
Post by Kenny McCormack
Also note: You do not need \ at the end of the line if the line ends with |
(also true for lines that end with || or && - and possibly others)
tty=$(tty)
end="$(sudo openconnect -b ... |
tee $tty | grep "^Session authentication will expire at" |
cut -d ' ' -f 7-)"
I also know that the "``" thing should and does do it but I explicitly
asked why the "read" approach fails and that I'd prefer a solution with
the assignment at the end.

You're absolutely right that there's probably a more elegant and cooler
way instead of "grep" and "cut" (maybe with "awk"), but that was a quick
one and the details can be optimized once things are working. But thanks
for the ideas.

Regards

fw
Janis Papanagnou
2024-10-25 04:57:12 UTC
Post by Kenny McCormack
Post by Kenny McCormack
I'm going to assume bash, but there isn't much difference. In
particular,
Post by Kenny McCormack
note that $() is POSIX, so you really don't need to ever mess with ``.
I know that "$()" is POSIX, but I don't see "``" as a mess; in fact,
I like it much better. And we're talking about ksh.
[...]
I also know that the "``" thing should and does do it but I explicitly
asked why the "read" approach fails and that I'd prefer a solution with
the assignment at the end.
The posts have indicated that folks here were trying to clarify
what you [intended to] do. Kenny was suggesting some basic things
that you should indeed consider adapting to make your programs
clearer and less error-prone. Of course you can do what you like.
But you should really consider the "new" (i.e. 1988) syntax for
command substitution that Ksh had invented for documented reasons.

Janis
Post by Kenny McCormack
[...]
Lem Novantotto
2024-10-24 22:45:39 UTC
Maybe some effect of the background option?
I guess it could be so.

However, since I don't use openconnect I could only try something like:

| $ openconnect -b unresponsive:local:ip:address | tee | grep .
| POST https://192.168.38.38/
| Failed to connect to host 192.168.38.38
| Failed to open HTTPS connection to 192.168.38.38
| Failed to complete authentication
| Failed to connect to 192.168.38.38:443: Nessun instradamento per l'host

which works, and in bash gives me the stderr printed by openconnect
(the three lines in the middle) and the stdout (piped from openconnect
to tee and then from tee to grep) printed in red by grep (first and
last line). In ksh there's no red color, but it works the same.

Then, in standard bash:

|$ openconnect -b 192.168.38.38 2>&1 | tee | grep -o POST |read end ; \
|> echo $end
|

whilst in ksh:

|$ openconnect -b 192.168.38.38 2>&1 | tee | grep -o POST |read end ; \
|> echo $end
POST

Again, it works as expected. So I didn't come to anything useful. Sorry.
--
Bye, Lem
Kenny McCormack
2024-10-25 07:10:35 UTC
In article <vfeimj$2qggf$***@dont-email.me>,
Lem Novantotto <***@none.invalid> wrote:
...
Post by Lem Novantotto
| $ openconnect -b unresponsive:local:ip:address | tee | grep .
Note that "tee" when run with no args is a no-op.
It is, in fact, equivalent to running "cat" (with no args).
--
I am not a troll.
Rick C. Hodgin
I am not a crook.
Rick M. Nixon
Lem Novantotto
2024-10-25 08:57:29 UTC
Post by Kenny McCormack
Note that "tee" when run with no args is a no-op.
Of course. :)
--
Bye, Lem
Frank Winkler
2024-10-12 15:49:55 UTC
Post by Janis Papanagnou
From your description below I'm not sure you still have a question or
whether you're just describing what you intended to do and are fine
with ksh's pipeline behavior. (Specifically I'm unsure about the 'B'
and what you mean by "sub-command B" in your example(s).)
A spawns B, and grabbing the desired information from their combined
output still doesn't work.
Post by Janis Papanagnou
The 'tty' command in your 'tee' pipeline command has no tty attached;
it reads from the pipe. That's why A | tee $(tty) | ... | read var
doesn't work as you expected. You have to grab the tty information
outside the pipeline (as you've experienced).
That was also my understanding after the experience ;) ... but thanks
for confirmation.

Regards

fw
Kenny McCormack
2024-10-19 11:50:07 UTC
In article <vecl6n$d0r$***@dont-email.me>,
Janis Papanagnou <janis_papanagnou+***@hotmail.com> wrote:
...
Post by Janis Papanagnou
(In other shells you have to work around the issue as demonstrated in
other answers to your post. Some workaround are more clumsy some less.
Setting "lastpipe" works (sort of) in bash.
Post by Janis Papanagnou
A shorter variant of the here-document posted elsethread can be using
here-strings
$ read var <<< $(uname -a)
another method is using process substitution and redirection
$ read var < <(uname -a)
Both supported by shells like ksh, bash, zsh, but non-standard as are
some other workaround proposals that use bash-specifics like 'coproc',
that doesn't work as widely as using '<<<' or '<(...)' do.)
There are lots of workarounds, but I think the main takeaway is that the
obvious-but-wrong idiom of "cmd | read foo" is just TBA.
--
Life's big questions are big in the sense that they are momentous. However, contrary to
appearances, they are not big in the sense of being unanswerable. It is only that the answers
are generally unpalatable. There is no great mystery, but there is plenty of horror.
(https://en.wikiquote.org/wiki/David_Benatar)
Helmut Waitzmann
2024-10-11 20:20:26 UTC
Post by Frank Winkler
Hi there !
$ var1=`uname -sr`
$ echo $var1
Darwin 24.0.0
$ read var2 <<< `uname -sr`
$ echo $var2
Darwin 24.0.0
$ uname -sr | read var3
$ echo $var3
$ uname -sr | read -p var3
$ echo $var3
$
While the first two behave as expected, I wonder why the
latter two fail. What's the difference behind the scenes?
And even more confusing, why does this familiar one work anyway?
$ sw_vers | while read line; do echo $line; done
ProductName: macOS
ProductVersion: 15.0.1
BuildVersion: 24A348
$
I've been using commands like that one for a very long time and
that's why I tried the simple "read" above - with no success.
How can I do such an assignment at the end of a command instead
of the beginning? Any ideas?
uname -sr | { read var3 ; echo $var3 ; }
Frank Winkler
2024-10-11 20:50:10 UTC
Post by Helmut Waitzmann
uname -sr | { read var3 ; echo $var3 ; }
That's exactly how I just wanted to confirm John-Paul's answer ...

$ uname -sr | ( read var3; echo $var3 )
Darwin 24.0.0
$

... but it still doesn't solve the issue that I need the result to be
visible in the parent shell.

Regards

fw
Lawrence D'Oliveiro
2024-10-11 21:03:07 UTC
Post by Frank Winkler
... but it still doesn't solve the issue that I need the result to be
visible in the parent shell.
coproc { uname -sr; }
read -u ${COPROC[0]} var3
wait $COPROC_PID
echo $var3
Kenny McCormack
2024-10-19 11:45:30 UTC
Post by Lawrence D'Oliveiro
Post by Frank Winkler
... but it still doesn't solve the issue that I need the result to be
visible in the parent shell.
coproc { uname -sr; }
read -u ${COPROC[0]} var3
wait $COPROC_PID
echo $var3
I'm actually a fan of "coproc" in bash, and I use it in my scripting, but I
think it is overkill in most cases. The most general command for variable
assignment (in bash) is "mapfile". "mapfile" supersedes "coproc" in most
cases. For the possible benefit of OP, here's the standard idiom for using
"mapfile", using the "sw_vers" program, which OP mentioned in passing
(AFAIK, "sw_vers" is a Mac OS thing):

mapfile -t < <(sw_vers)

which populates the array MAPFILE.
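For anyone unfamiliar with the idiom, here is a minimal runnable sketch (printf stands in for sw_vers, which only exists on macOS; everything else is stock bash):

```shell
#!/usr/bin/env bash
# printf stands in for the macOS-only sw_vers; -t strips the trailing
# newline from each line. The default target array is MAPFILE.
mapfile -t < <(printf 'ProductName: macOS\nProductVersion: 15.0.1\n')
echo "${#MAPFILE[@]}"    # number of lines captured -> 2
echo "${MAPFILE[0]}"     # first line -> ProductName: macOS
```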

A couple of other points:
1) When using "coproc", you can get away with just $COPROC for the
output of the co-process. This is a little easier to type than
${COPROC[0]} - even if this does get flagged as a warning by the
"shellcheck" program. Note that in bash, all variables are arrays;
it's just that most only have one element:
% bash -c 'echo ${HOME[0]}'
/home/me
%

2) OP's main concern actually seems to be aesthetic. He just wants the
variable name at the end of the line instead of at the beginning. Kind
of like the difference between the two styles of assembler languages,
where some are: "move src,dst" and others (most) are "move dst,src".
(It's been a long time since I've done assembler language.)

Finally, note that you just generally learn to avoid the (wrong) idiom of:

cmd | read bar

because you learn early on that it doesn't work. I think the most basic
(works in any sh-like shell, even in the bad old days of Solaris)
alternative is:

read bar << EOF
$(cmd)
EOF
--
https://www.rollingstone.com/politics/politics-news/the-10-dumbest-things-ever-said-about-global-warming-200530/

RS contributor Bill McKibben lambasted this analysis in his 2007 book, Deep Economy.
It's nice to have microelectronics; it's necessary to have lunch, wrote McKibben.
Janis Papanagnou
2024-10-19 12:52:01 UTC
Permalink
Post by Kenny McCormack
Post by Lawrence D'Oliveiro
Post by Frank Winkler
... but it still doesn't solve the issue that I need the result to be
visible in the parent shell.
coproc { uname -sr; }
read -u ${COPROC[0]} var3
wait $COPROC_PID
echo $var3
I'm actually a fan of "coproc" in bash, and I use it in my scripting, but I
think it is overkill in most cases. [...]
Also, if above code is how to use co-processes in Bash, I consider
that extremely clumsy (if compared to, say, Ksh).

(Mileages may vary, of course.)

Janis
Kenny McCormack
2024-10-19 13:35:59 UTC
Permalink
Post by Janis Papanagnou
Post by Kenny McCormack
Post by Lawrence D'Oliveiro
Post by Frank Winkler
... but it still doesn't solve the issue that I need the result to be
visible in the parent shell.
coproc { uname -sr; }
read -u ${COPROC[0]} var3
wait $COPROC_PID
echo $var3
I'm actually a fan of "coproc" in bash, and I use it in my scripting, but I
think it is overkill in most cases. [...]
Also, if above code is how to use co-processes in Bash, I consider
that extremely clumsy (if compared to, say, Ksh).
(Mileages may vary, of course.)
I think he was being intentionally verbose for pedagogic purposes.

I won't bore you with the details, but obviously a lot of the text in the
quoted 4 lines is unnecessary in practice.

Just out of curiosity, how would you (Janis) do this in ksh?
--
The randomly chosen signature file that would have appeared here is more than 4
lines long. As such, it violates one or more Usenet RFCs. In order to remain
in compliance with said RFCs, the actual sig can be found at the following URL:
http://user.xmission.com/~gazelle/Sigs/ForFoxViewers
Janis Papanagnou
2024-10-19 13:54:44 UTC
Permalink
Post by Kenny McCormack
Post by Janis Papanagnou
Post by Kenny McCormack
Post by Lawrence D'Oliveiro
Post by Frank Winkler
... but it still doesn't solve the issue that I need the result to be
visible in the parent shell.
coproc { uname -sr; }
read -u ${COPROC[0]} var3
wait $COPROC_PID
echo $var3
I'm actually a fan of "coproc" in bash, and I use it in my scripting, but I
think it is overkill in most cases. [...]
Also, if above code is how to use co-processes in Bash, I consider
that extremely clumsy (if compared to, say, Ksh).
(Mileages may vary, of course.)
I think he was being intentionally verbose for pedagogic purposes.
I won't bore you with the details, but obviously a lot of the text in the
quoted 4 lines is unnecessary in practice.
Just out of curiosity, how would you (Janis) do this in ksh?
For the question on topic I wouldn't (as you wouldn't, IIUC) use
co-processes in the first place - even if [in ksh] we don't need
file descriptor numbers from arrays (like in the bash sample).

I'd use one of the one-liner solutions if I hadn't the "lastpipe"
functionality built-in or available. It also makes no sense, IMO,
to use co-processes that just read a simple value from a command.
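(For reference, the bash "lastpipe" functionality mentioned above looks like this - a sketch; it only takes effect when job control is off, as in scripts:)

```shell
#!/usr/bin/env bash
shopt -s lastpipe          # run the last pipeline element in the current shell
# (only effective with job control off, i.e. in scripts, not at an
# interactive prompt)
uname -sr | read -r var3   # read now runs in this shell, not a subshell
echo "$var3"
```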

Co-processes I have to use only rarely, and the applications are
from commands that provide some "service"; I send a request, and
then I retrieve the response (and rinse repeat, as they say).

The syntax for the [unnecessary] co-process application depicted
above would in Ksh be

uname -sr |&
read -p var
echo "$var"


Janis
Janis Papanagnou
2024-10-19 14:11:44 UTC
Permalink
Post by Janis Papanagnou
Post by Lawrence D'Oliveiro
coproc { uname -sr; }
read -u ${COPROC[0]} var3
wait $COPROC_PID
echo $var3
The syntax for the [unnecessary] co-process application depicted
above would in Ksh be
uname -sr |&
read -p var
echo "$var"
Concerning the syntax (differences, and generally) I want to add
that it's worthwhile to compare the Ksh syntax with the approach
that we would typically take [in Ksh] to solve the task

uname -sr | read var
echo "$var"

We see that the co-process syntax is a straightforward variant of
the ordinary piping. (As opposed to bash that introduces a lot of
[bulky] stuff that's not even resembling in any way the read-pipe.)


For the interested folks let me hijack my post to add that it's
possible to redirect the co-processes to other file descriptors
(using <&p and >&p with appropriate file descriptor numbers) so
that multiple co-processes can be simultaneously used.

Janis
Kenny McCormack
2024-10-19 14:52:52 UTC
Permalink
Post by Janis Papanagnou
Post by Kenny McCormack
Post by Janis Papanagnou
Post by Kenny McCormack
Post by Lawrence D'Oliveiro
Post by Frank Winkler
... but it still doesn't solve the issue that I need the result to be
visible in the parent shell.
coproc { uname -sr; }
read -u ${COPROC[0]} var3
wait $COPROC_PID
echo $var3
I'm actually a fan of "coproc" in bash, and I use it in my scripting,
but I think it is overkill in most cases. [...]
Also, if above code is how to use co-processes in Bash, I consider
that extremely clumsy (if compared to, say, Ksh).
(Mileages may vary, of course.)
I think he was being intentionally verbose for pedagogic purposes.
I won't bore you with the details, but obviously a lot of the text in the
quoted 4 lines is unnecessary in practice.
Just out of curiosity, how would you (Janis) do this in ksh?
For the question on topic I wouldn't (as you wouldn't, IIUC) use
co-processes in the first place - even if [in ksh] we don't need
file descriptor numbers from arrays (like in the bash sample).
Agreed.
Post by Janis Papanagnou
I'd use one of the one-liner solutions if I hadn't the "lastpipe"
functionality built-in or available. It also makes no sense, IMO,
to use co-processes that just read a simple value from a command.
Agreed.
Post by Janis Papanagnou
Co-processes I have to use only rarely, and the applications are
from commands that provide some "service"; I send a request, and
then I retrieve the response (and rinse repeat, as they say).
Agreed.
Post by Janis Papanagnou
The syntax for the [unnecessary] co-process application depicted
above would in Ksh be
uname -sr |&
read -p var
echo "$var"
Which is pretty much the same as in bash, which would be:

coproc { uname -sr; }
read -u$COPROC
echo "$REPLY"

Note that bash can be compiled to support multiple concurrent coprocs. (*)
Thus, it makes sense to have to explicitly specify the fd to read or write,
rather than (AIUI) in ksh, where there is just "the coproc".

The multi-coproc feature is off by default (in bash), but can be turned on
by setting an option in one of the config.h files and re-compiling bash.
It is considered "experimental", but seems to work OK as far as I can tell.

(*) You do this by giving the 2nd and subsequent coprocs names and then use
that name instead of the default "COPROC" to access the I/O fds.
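A concrete sketch of such naming, using cat as a trivial echo service (the name CAT is arbitrary; a single coproc is used, so no recompilation is needed):

```shell
#!/usr/bin/env bash
# A named coproc; its fds land in the array of the same name:
# ${CAT[0]} reads from the coproc, ${CAT[1]} writes to it.
coproc CAT { cat; }
printf 'hello coproc\n' >&"${CAT[1]}"
read -r reply <&"${CAT[0]}"
echo "$reply"              # hello coproc
eval "exec ${CAT[1]}>&-"   # close the write end so cat sees EOF and exits
wait "$CAT_PID"
```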
--
"The party of Lincoln has become the party of John Wilkes Booth."

- Carlos Alazraqui -
Lawrence D'Oliveiro
2024-10-19 21:42:26 UTC
Permalink
Also, if above code is how to use co-processes in Bash, I consider that
extremely clumsy (if compared to, say, Ksh).
Bash allows for named coprocs. That means you can have multiple coprocs
going at once.
Kenny McCormack
2024-10-19 22:56:53 UTC
Permalink
Post by Lawrence D'Oliveiro
Also, if above code is how to use co-processes in Bash, I consider that
extremely clumsy (if compared to, say, Ksh).
"extremely" seems more than a bit over the top. Maybe somewhat clumsy, but
hardly "extremely".
Post by Lawrence D'Oliveiro
Bash allows for named coprocs. That means you can have multiple coprocs
going at once.
Yes, but only if you re-compile your own version of bash, with that option
turned on in one of the config.h files.

To be precise, multiple co-procs will seem to work even if not enabled as
described above, but things then start to go awry in mysterious ways. I've
experienced exactly this until I did the research into how to properly
enable it.

But, yes, that's part of the point of making it possible to assign a name
to a coproc (instead of just taking the default of "coproc").
--
If Jeb is Charlie Brown kicking a football-pulled-away, Mitt is a '50s
housewife with a black eye who insists to her friends the roast wasn't
dry.
Janis Papanagnou
2024-10-20 05:09:15 UTC
Permalink
Post by Kenny McCormack
Post by Lawrence D'Oliveiro
Also, if above code is how to use co-processes in Bash, I consider that
extremely clumsy (if compared to, say, Ksh).
"extremely" seems more than a bit over the top. Maybe somewhat clumsy, but
hardly "extremely".
I don't think it makes much sense to discuss subjective valuations.
But okay.

Mine stems from a "per-feature level" (and is not absolute). It was
based on LDO's suggestion (which was a bit more complex than your
version) that used: (1) a [new] keyword, (2) command grouping (not
sure it's necessary[*] in the first place?), (3) using an explicit
descriptor through (4) a variable, (5) using an array instead of a
scalar element, and (6) using 'wait' (I'm also not sure whether this
is necessary in the first place, or just the poster not knowing better?).

For a simple feature that's really a lot compared to e.g. ksh's way.
(YMMV. But if "somewhat clumsy" triggers less emotions then I'm fine
with that.)

I also think that syntaxes that resemble existing constructs serve
"simplicity", like using '|&', which resembles both the pipelining
communication part '|' and the asynchronicity of the call '&'. It's
also visible (as already shown elsethread) in the similarity of the
calling syntax contexts; compare
uname -sr | read
versus
uname -sr |& read -p

(Yes, '|' is different than '|&', which is more like '&' since it
separates commands where the pipe connects them. But that was not
the point here.)

Of course per bash's defaults even the simple 'uname -sr | read' has
to be written (for example) as 'uname -sr | { read ; echo $REPLY ;}'
i.e. with the spurious braces in case you want to access the value,
so the added 'coproc { ;}' complexity in Bash might not look too
bad to Bash users, whereas Ksh users (like me) will probably value it
differently.

(It's also noteworthy that a common tool like GNU Awk also uses the
same token for its co-process feature which serves comprehensibility
and makes it "simple" to use on another "level of consideration".)
Post by Kenny McCormack
Post by Lawrence D'Oliveiro
Bash allows for named coprocs. That means you can have multiple coprocs
going at once.
Note, that's possible in Ksh as well. Ksh's design decision is that
the common case (using one co-process) is "extremely" simple to use
and doesn't add unnecessary complexity or raise questions on details.

(And using multiple co-processes isn't difficult either with using
the known shell's redirections concept.)
Post by Kenny McCormack
[...]
But, yes, that's part of the point of making it possible to assign a name
to a coproc (instead of just taking the default of "coproc").
(Given how [supposedly] rarely this feature is used, and that we were
speaking about subjective impressions, we have to assess that this
post got far too long. And probably triggers more dispute. Well...)

Janis

[*] Would it be possible to write a simplified form like
'coproc uname -sr;'
or is it syntactically necessary to write
'coproc { uname -sr; }'
?
Kenny McCormack
2024-10-20 09:12:36 UTC
Permalink
Post by Janis Papanagnou
Post by Kenny McCormack
Also, if above code is how to use co-processes in Bash, I consider that
extremely clumsy (if compared to, say, Ksh).
"extremely" seems more than a bit over the top. Maybe somewhat clumsy, but
hardly "extremely".
I don't think it makes much sense to discuss subjective valuations.
But okay.
It has to do with certain habits/conventions that Usenet posters have
fallen into. One sees this frequently, where words like "very" and
"extremely" are dropped into one's phrasing for no particularly good
reason. You (Janis) are hardly alone in this habit, but it is something
that has frequently annoyed me about Usenet parlance.

There is a story on this subject (having to do with the journalistic
over-use of the word "very"), which I frequently quote in real life
conversations, attributed to Mark Twain. I may yet end up telling that
story on this thread, but not right now.
Post by Janis Papanagnou
Mine stems from a "per-feature level" (and is no absolute). It was
based on LDO's suggestion (which was a bit more complex than your
version) that used; (1) a [new] keyword, (2) command grouping (not
sure it's necessary[*] in the first place?), (3) using an explicit
descriptor through (4) a variable, (5) using an array instead of a
scalar element, and (6) using 'wait' (I'm also not sure this is
necessary in the first place or just the poster not knowing better?).
My opinion, which I have stated consistently in this thread, is that it
doesn't add up to much difference - hence my criticism of your use of the
word "extremely". Addressing each of your points:
1) So what? |& vs. coproc. Who cares? I explained in another post
why (IMHO) they did it this way. As both LDO and I have noted, bash
supports multiple concurrent coprocs, so doing it this way is
necessary.

2) It is necessary. The syntax is admittedly a bit weird and not well
documented. If the thing you are launching as a coproc is anything
other than a single word command, then it has to be enclosed in {}.
For quite a while, I didn't know this, so as a workaround, I'd write
a function (say: foo()) then do: coproc foo

3&4&5) Necessary because bash supports multiple concurrent coprocs.
Also, as I've noted, you don't actually have to use array notation to
get the "read output from the coproc" fd.

6) Not necessary, and I've never used it in my code. As I have
mentioned in another post, I think LDO was being intentionally
pedantically complete in his example.
Post by Janis Papanagnou
For a simple feature that's really a lot compared to e.g. ksh's way.
(YMMV. But if "somewhat clumsy" triggers less emotions then I'm fine
with that.)
I think there is no significant difference at all (*). See above.

(*) Other than that bash does support multiple concurrent coprocs (if you
are willing to recompile bash).

...
Post by Janis Papanagnou
(Yes, '|' is different than '|&', which is more like '&' since it
separates commands where the pipe connects them. But that was not
the point here.)
As noted elsethread, bash took |& from csh (*). So, they had to come up
with something else for coprocs.

(*) I don't think this was a particularly bright move on their part, BTW.
I never use it; I always use the more normal "2>&1" syntax.

...

No comment on the rest, other than to say that you seem to claim that ksh
does support multiple concurrent coprocs, which I think is wrong, but I
think we may not be talking about the same thing (so probably not much
point in continuing in that vein).
--
Conservatives want smaller government for the same reason criminals want fewer cops.
Janis Papanagnou
2024-10-25 04:26:14 UTC
Permalink
Post by Kenny McCormack
Post by Janis Papanagnou
I don't think it makes much sense to discuss subjective valuations.
But okay.
It has to do with certain habits/conventions that Usenet posters have
fallen into. [...]
Okay.
Post by Kenny McCormack
[...]
No comment on the rest, other than to say that you seem to claim that ksh
does support multiple concurrent coprocs, which I think is wrong, [...]
I meant that you can have several asynchronous processes started
which are each connected to the same main shell session with pipes
for communicating. For that you have to redirect the default pipe
channels because there's of course just one option '-p' with the
commands 'read' and 'print' and you need some way to differentiate
the various channels. I just hacked a sample to show what I mean...

bc |&
exec 3<&p 4>&p
bc -l |&
exec 5<&p 6>&p

while IFS= read -r line
do
case ${line} in
(*[.]*)
print -u6 "$line"
read -u5 res
print "Res(R): $res"
;;
(*)
print -u4 "$line"
read -u3 res
print "Res(I): $res"
;;
esac
done

(The second 'exec' isn't necessary for the last started coprocess
since you can use option '-p' as with a single coprocess instead
of -u5 and -u6, but you see that in principle arbitrary numbers
of coprocesses are possible.)

Janis
Lawrence D'Oliveiro
2024-10-25 04:35:49 UTC
Permalink
I meant that you can have several asynchronous processes started which
are each connected to the same main shell session with pipes for
communicating. For that you have to redirect the default pipe channels
because there's of course just one option '-p' with the commands 'read'
and 'print' and you need some way to differentiate the various channels.
Bash does that in a nicer way.
Janis Papanagnou
2024-10-25 05:03:07 UTC
Permalink
Post by Lawrence D'Oliveiro
I meant that you can have several asynchronous processes started which
are each connected to the same main shell session with pipes for
communicating. For that you have to redirect the default pipe channels
because there's of course just one option '-p' with the commands 'read'
and 'print' and you need some way to differentiate the various channels.
Bash does that in a nicer way.
For multiple co-processes you may be right. (I certainly differ
given how Bash implemented it, with all the question that arise.)
And I already said: I don't think it makes much sense to discuss
subjective valuations.

But my response was anyway just countering the (wrong) opinion
that it would not be possible in Ksh.

There's no more to be said on my part.

Janis
Kenny McCormack
2024-10-25 08:10:11 UTC
Permalink
In article <vff8qc$31tk9$***@dont-email.me>,
Janis Papanagnou <janis_papanagnou+***@hotmail.com> wrote:
...
Post by Janis Papanagnou
For multiple co-processes you may be right. (I certainly differ
given how Bash implemented it, with all the question that arise.)
And I already said: I don't think it makes much sense to discuss
subjective valuations.
Our opinions are all we have. I can't see how that can be "off topic".

It was you who first brought up your personal opinion on the subject
(comparing the ksh implementation of coprocs with the bash implementation).
To be clear, there is nothing wrong with that; we are here to exchange
opinions.

I really do think that there's no significant difference in verbosity
between the two implementations (certainly in the simple case). The ksh
way of handling multiples looks kludgey to me (you may think otherwise, of
course). It certainly looks to me that the bash way was designed (no doubt
benefiting from ksh having paved the way), whereas the ksh way "just grew".

And, finally, yes, it is odd that the bash way was designed to support
multi-coproc yet multi-coproc doesn't work "out of the box" (I've described
elsethread what you have to do to get it to work). Maybe this situation
has changed in the years since I last researched it.
--
The randomly chosen signature file that would have appeared here is more than 4
lines long. As such, it violates one or more Usenet RFCs. In order to remain
in compliance with said RFCs, the actual sig can be found at the following URL:
http://user.xmission.com/~gazelle/Sigs/Aspergers
Janis Papanagnou
2024-10-25 09:32:02 UTC
Permalink
Post by Kenny McCormack
...
Post by Janis Papanagnou
For multiple co-processes you may be right. (I certainly differ
given how Bash implemented it, with all the question that arise.)
And I already said: I don't think it makes much sense to discuss
subjective valuations.
Our opinions are all we have. I can't see how that can be "off topic".
Oh, don't get me wrong; to each his own [opinion]. That's fine.

It's certainly also not off-topic, but it's just opinion. Though the
other poster also didn't provide any examples or rationales for his
opinion - which is not surprising if you know him - whereas I tried to
explain my POV, whether you agree with it or not.

I can extend on what I already hinted upthread...
Post by Kenny McCormack
[...]
I really do think that there's no significant difference in verbosity
between the two implementations (certainly in the simple case).
Verbosity was not my point. (Only that I was repelled by the other
poster's, as far as I understand, unnecessary ballast in his code.)

But clearness or fitting into existing shell concepts do matter, IMO.
Post by Kenny McCormack
The ksh
way of handling multiples looks kludgey to me (you may think otherwise, of
course). It certainly looks to me that the bash way was designed (no doubt
benefiting from ksh having paved the way), whereas the ksh way "just grew".
Well, the Bash way looks quite "hacky" in my opinion. But maybe you
could explain what I might have missed. The questions I'd have are
(for example): [from the bash man page] in

coproc [NAME] command [redirections]

what is the 'coproc' actually (beyond a "reserved word")? Is it a
shell construct like, say, a 'case' construct, or a command, like,
say, 'pty' or 'time'? Then, depending on that; is the redirection
part of a special case here? And that's the reason why it's listed
explicitly? Note redirection is an orthogonal concept! Here too?
The access to the FDs is implicitly defined by 'COPROC[0]' for
"output" to the process and 'COPROC[1]' for input to the process;
is this coherent with 'stdin'(0) and 'stdout'(1); this at least
irritates me, it's not as obvious as it could be.

In Ksh I have the simple case that you can simply use

command |&

print -p
read -p

Easy to use, clear, no ballast, no questions [to me].

If you want to redirect it with explicit FD control you use Ksh's

exec 3<&p 4>&p

(and then 'read -u3' and 'print -u4' to communicate) for example.

Or you want Ksh to choose the FDs, then use variables (as you can
also generally do with non-coprocess related redirections) like

exec {IN}<&p {OUT}>&p

(with arbitrary variable names, here IN and OUT chosen, which looks
more sophisticated to me than 'COPROC[0]' and 'COPROC[1]'). And you
can use the variables then simply as you're used to from other cases

print -u $OUT
read -u $IN

This fits well in Ksh's redirection feature set. And I suppose Bash
does not support FD variables, since the 'COPROC' (or own variables)
in this specific ("hacky") context needs to be introduced? - Or am I
mistaken that this is a 'coproc'-specific hack? - Bash's construct
[to me] looks flange-mounted (hope that is the correct word to use).

This post should also explain why I think that your valuation that
in Ksh the feature "just grew" is not justified. Beyond the '|&' vs.
'coproc' reserved word; consistency with '|' and '&', redirection,
assigned FDs (if desired), consistent 'p' as read/print option and
as FD, all fits and allows for readable straightforward code in Ksh
that also doesn't leave me with questions.

BTW, co-processes were designed into the shell with Ksh88 already;
not much to "just grow" (as you insinuated). ;-)

Janis
Post by Kenny McCormack
[...]
Lem Novantotto
2024-10-25 11:15:49 UTC
Permalink
Post by Janis Papanagnou
what is the 'coproc' actually (beyond a "reserved word")?
Just to clarify it to myself: I'd say that 'coproc' is a reserved word
that introduces a command (simple or compound). The command introduced by
the reserved word 'coproc' is called a *coprocess*.
So coproc is a reserved word, a coprocess is a command... and to me
"coproc command" is a shell construct. Kind of.

And coproc is just the bash built-in way to realize a two-way pipe,
an alternative to the named pipes created with the mkfifo command.
Post by Janis Papanagnou
Then, depending on that; is the redirection part
of a special case here? And that's the reason why it's listed
explicitly? Note redirection is an orthogonal concept! Here too?
I think it's listed explicitly just to note the order of creation: the
coproc two-way pipe is created *before* any other redirections.
Post by Janis Papanagnou
The
access to the FDs is implicitly defined by 'COPROC[0]' for "output" to
the process and 'COPROC[1]' for input to the process; is this coherent
with 'stdin'(0) and 'stdout'(1); this at least irritates me, it's not as
obvious as it could be.
I'd agree with you... and indeed I do! But since we have a two-way pipe,
the input of the coprocess is on the output of the executing shell, and
vice-versa. So maybe it's just a matter of... where you look at it from?
And if you look at it from the executing shell... bah, dunno why it's
the way it is.
Post by Janis Papanagnou
Or you want Ksh to choose the FDs, then use variables (as you can
also generally do with non-coprocess related redirections) like
exec {IN}<&p {OUT}>&p
(with arbitrary variable names, here IN and OUT chosen, which looks
more sophisticated to me than 'COPROC[0]' and 'COPROC[1]').
Agreed. IMHO the capability to clearly differentiate names for input and
output is a plus.
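(Incidentally, for ordinary redirections Bash does accept an analogous {varname} form - just not for a coproc 'p' channel. A sketch, with IN an arbitrary name:)

```shell
#!/usr/bin/env bash
# Bash (>= 4.1) allocates a free descriptor and stores it in a variable
# for ordinary redirections, though not for a coproc's 'p' channel.
tmp=$(mktemp)
printf 'data\n' > "$tmp"
exec {IN}< "$tmp"     # bash picks a free fd and stores its number in IN
read -r line <&"$IN"
exec {IN}<&-          # close the fd held in IN
rm -f "$tmp"
echo "$line"          # data
```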
--
Bye, Lem
Kenny McCormack
2024-10-25 12:54:56 UTC
Permalink
In article <vffoij$349j2$***@dont-email.me>,
Janis Papanagnou <janis_papanagnou+***@hotmail.com> wrote:
...
Post by Janis Papanagnou
This post should also explain why I think that your valuation that
in Ksh the feature "just grew" is not justified. Beyond the '|&' vs.
'coproc' reserved word; consistency with '|' and '&', redirection,
assigned FDs (if desired), consistent 'p' as read/print option and
as FD, all fits and allows for readable straightforward code in Ksh
that also doesn't leave me with questions.
I have no further comment on the issues you seem to have with the bash
implementation of coprocs. I think I've explained everything (more than
once) already in these threads.
Post by Janis Papanagnou
BTW, co-processes were designed into the shell with Ksh88 already;
not much to "just grow" (as you insinuated). ;-)
Don't be afraid. Sometimes people are attracted to "just grew". They like
and admire that organic look. I really do think that ksh's implementation
of coprocs has an "organic" look. As I said, bash's way seems much more
"designed".

P.S. To answer one question, "coproc" *is* a shell keyword in bash:

$ type coproc
coproc is a shell keyword
$

I think you will find that |& is also a keyword (in ksh), although I could
not get "type" (or anything similar) to confirm that suspicion.
--
There's nothing more American than demanding to carry an AR-15 to
"protect yourself" but refusing to wear a mask to protect everyone else.
Janis Papanagnou
2024-10-26 14:20:34 UTC
Permalink
Yes. - I just asked that to get a better feeling for the implications,
e.g. for the redirection question that arose in Bash context. (Lem had
addressed that question in his post; a possible reason why redirection
is explicitly listed in the Bash man page with 'coproc'.)
Post by Kenny McCormack
[...]
I think you will find that |& is also a keyword (in ksh),
In Ksh it's not that relevant "what it is" because it's (syntactically
more obvious) a command separation like ';', <NL>, or '&'.
It's documented, and you can also see that if you write (for example)
bc |& exec 3<&p 4>&p
instead of having the two commands each in an own line.

In Ksh we write 'cmd |&' for a co-process as we write 'cmd &' for a
background process.[*]
Post by Kenny McCormack
although I could
not get "type" (or anything similar) to confirm that suspicion.
I don't think it makes sense to use the 'type' command on syntactic
symbols like ';', '&', '|&', '|', etc. - It's anyway documented in the
man page:

A list is a sequence of one or more pipelines separated by ;, &, |&,
&&, or ||, and optionally terminated by ;, &, or |&.


Janis
Kenny McCormack
2024-10-25 07:06:23 UTC
Permalink
Post by Lawrence D'Oliveiro
I meant that you can have several asynchronous processes started which
are each connected to the same main shell session with pipes for
communicating. For that you have to redirect the default pipe channels
because there's of course just one option '-p' with the commands 'read'
and 'print' and you need some way to differentiate the various channels.
Bash does that in a nicer way.
Agreed.
--
The people who were, are, and always will be, wrong about everything, are still
calling *us* "libtards"...

(John Fugelsang)
Kenny McCormack
2024-10-25 07:05:44 UTC
Permalink
In article <vff6l8$31j2u$***@dont-email.me>,
Janis Papanagnou <janis_papanagnou+***@hotmail.com> wrote:
...
Post by Janis Papanagnou
Post by Kenny McCormack
No comment on the rest, other than to say that you seem to claim that ksh
does support multiple concurrent coprocs, which I think is wrong, [...]
I meant that you can have several asynchronous processes started
which are each connected to the same main shell session with pipes
for communicating. For that you have to redirect the default pipe
channels because there's of course just one option '-p' with the
commands 'read' and 'print' and you need some way to differentiate
the various channels. I just hacked a sample to show what I mean...
OK. Got it. That's how you would do it in ksh.

I think I like the bash way better, but we are just discussing details at
this point.

(Nice example, BTW)
--
I've been watching cat videos on YouTube. More content and closer to
the truth than anything on Fox.
Christian Weisgerber
2024-10-11 23:25:39 UTC
Permalink
Post by Frank Winkler
$ uname -sr | ( read var3; echo $var3 )
Darwin 24.0.0
$
... but it still doesn't solve the issue that I need the result to be
visible in the parent shell.
read var3 <<EOF
$(uname -sr)
EOF
echo "$var3"
--
Christian "naddy" Weisgerber ***@mips.inka.de
Kenny McCormack
2024-11-15 10:58:37 UTC
Permalink
In article <***@helmutwaitzmann.news.arcor.de>,
Helmut Waitzmann <***@xoxy.net> wrote:
...
Post by Helmut Waitzmann
uname -sr | { read var3 ; echo $var3 ; }
More simply;

uname -sr
--
If Jeb is Charlie Brown kicking a football-pulled-away, Mitt is a '50s
housewife with a black eye who insists to her friends the roast wasn't
dry.
Lem Novantotto
2024-11-16 00:07:54 UTC
Permalink
Post by Kenny McCormack
...
Post by Helmut Waitzmann
uname -sr | { read var3 ; echo $var3 ; }
More simply;
uname -sr
LOL!
That was funny, thanks. :-D
--
Bye, Lem