How to completely terminate the automatic SSH login on the Mac terminal?
This might seem like a duplicate question, but it's not: none of the answers on Stack Overflow solve my problem. I set up an SSH connection for accessing Hadoop on my Mac, but I only want to use the connection when I am using Hadoop. Whenever I quit the terminal and restart it, the automatic SSH login kicks in again. How do I stop this?
Running a command like exit only stops the current connection; a new connection is established as soon as I open a new terminal.
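For what it's worth, an automatic login like this is usually triggered by an ssh line in a shell startup file. A quick way to locate it, assuming the default zsh/bash startup files on macOS:

```shell
# search the common shell startup files for the ssh command that runs on login
grep -n "ssh" ~/.zshrc ~/.zprofile ~/.bash_profile ~/.bashrc ~/.profile 2>/dev/null
```

Commenting out the matching line should stop the automatic connection; if nothing turns up, it may instead be a "Run command" entry under Terminal > Preferences > Profiles > Shell.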
See also questions close to this topic
MacOS webview keeps imposing applewebdata:// prefix
In the delegate methods decidePolicyForMIMEType and didStartProvisionalLoadForFrame, I am seeing URLs beginning with applewebdata:// which makes it impossible for me to filter out URLs that I don't want.
How can I tell WebView or WKWebView to stop using this URL scheme?
Open AirDrop programmatically
Which path do I need to enter to open AirDrop in the Finder?
e.g. I can enter ~/ to open my home folder; which path would point to AirDrop?
How to install specific version phalcon on mac
I want to install phalcon 3.2 for one of our projects, which is built using phalcon 3.2. I tried installing it through Homebrew, but no taps were found. I also tried installing it through MacPorts, but no phalcon extension is generated among the PHP extensions.
MacPorts commands to install phalcon:
sudo port install php55-phalcon
sudo port install php56-phalcon
Apart from that, when I install the latest version by cloning the cphalcon repo and installing it through the terminal as prescribed in the documentation, it works fine and the extension is created. But for our project we actually need phalcon 3.2. Any help will be appreciated.
Cloning the repo and then installing:
git clone git://github.com/phalcon/cphalcon.git
cd cphalcon/build
sudo ./install
How can I write NULL value to parquet using org.apache.parquet.hadoop.ParquetWriter?
I have a tool that uses an org.apache.parquet.hadoop.ParquetWriter to convert CSV data files to parquet data files.
I can write basic primitive types just fine (INT32, DOUBLE, BINARY string).
I need to write NULL values, but I do not know how. I've tried simply writing null with ParquetWriter, and it throws an exception.
How can I write NULL using org.apache.parquet.hadoop.ParquetWriter? Is there a nullable type?
The code is, I believe, self-explanatory:
ArrayList<Type> fields = new ArrayList<>();
fields.add(new PrimitiveType(Type.Repetition.OPTIONAL, PrimitiveTypeName.INT32, "int32_col", null));
fields.add(new PrimitiveType(Type.Repetition.OPTIONAL, PrimitiveTypeName.DOUBLE, "double_col", null));
fields.add(new PrimitiveType(Type.Repetition.OPTIONAL, PrimitiveTypeName.BINARY, "string_col", null));
MessageType schema = new MessageType("input", fields);

Configuration configuration = new Configuration();
configuration.setQuietMode(true);
GroupWriteSupport.setSchema(schema, configuration);

SimpleGroupFactory f = new SimpleGroupFactory(schema);
ParquetWriter<Group> writer = new ParquetWriter<Group>(
        new Path("output.parquet"),
        new GroupWriteSupport(),
        CompressionCodecName.SNAPPY,
        ParquetWriter.DEFAULT_BLOCK_SIZE,
        ParquetWriter.DEFAULT_PAGE_SIZE,
        1048576,
        true,
        false,
        ParquetProperties.WriterVersion.PARQUET_1_0,
        configuration
);

// create row 1 with defined values
Group group1 = f.newGroup();
Integer int1 = 100;
Double double1 = 0.5;
String string1 = "string-value";
group1.add(0, int1);
group1.add(1, double1);
group1.add(2, string1);
writer.write(group1);

// create row 2 with NULL values -- does not work!
Group group2 = f.newGroup();
Integer int2 = null;
Double double2 = null;
String string2 = null;
group2.add(0, int2);   // <-- throws NullPointerException
group2.add(1, double2); // <-- throws NullPointerException
group2.add(2, string2); // <-- throws NullPointerException
writer.write(group2);

writer.close();
Error writing to OrcNewOutputFormat using MapR MultipleOutputs
We read data from ORC files and write it back in ORC and Parquet formats using MultipleOutputs. Our job is map-only and has no reducer. In some cases we get the following errors, which fail the entire job. I think both errors are related, but I'm not sure why they don't occur for every job. Let me know if more information is required.
Error: java.lang.ArrayIndexOutOfBoundsException: 1000
    at org.apache.orc.impl.writer.StringTreeWriter.writeBatch(StringTreeWriter.java:70)
    at org.apache.orc.impl.writer.StructTreeWriter.writeRootBatch(StructTreeWriter.java:56)
    at org.apache.orc.impl.WriterImpl.addRowBatch(WriterImpl.java:546)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushInternalBatch(WriterImpl.java:297)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:334)
    at org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat$OrcRecordWriter.close(OrcNewOutputFormat.java:67)
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs$RecordWriterWithCounter.close(MultipleOutputs.java:375)
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574)

Error: java.lang.NullPointerException
    at java.lang.System.arraycopy(Native Method)
    at org.apache.orc.impl.DynamicByteArray.add(DynamicByteArray.java:115)
    at org.apache.orc.impl.StringRedBlackTree.addNewKey(StringRedBlackTree.java:48)
    at org.apache.orc.impl.StringRedBlackTree.add(StringRedBlackTree.java:60)
    at org.apache.orc.impl.writer.StringTreeWriter.writeBatch(StringTreeWriter.java:70)
    at org.apache.orc.impl.writer.StructTreeWriter.writeRootBatch(StructTreeWriter.java:56)
    at org.apache.orc.impl.WriterImpl.addRowBatch(WriterImpl.java:546)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushInternalBatch(WriterImpl.java:297)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:334)
    at org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat$OrcRecordWriter.close(OrcNewOutputFormat.java:67)
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs$RecordWriterWithCounter.close(MultipleOutputs.java:375)
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:574)
Convert SQL statement to mapreduce in Java without any library
I have written part of a compiler for SQL and want to make it work like Hadoop: I want to convert SQL queries to map-reduce in Java without using Hadoop and its libraries.
For example, the users table is stored across more than one CSV file in a folder on the computer, and the query is
select count(*), username from users where birthdate > 1997 group by location
and also the join operator.
PHP Output correct in terminal but not in browser
I just started working with shell_exec in PHP and am stuck at this point. Below is my PHP script, which runs correctly in the terminal but not in the browser.
<?php
echo shell_exec("ssh -tq firstname.lastname@example.org \"whoami\"");
?>
And the output in the terminal is:
$ php /var/www/html/monitor/ssh.php
root
The interesting thing is that just whoami works like a charm:
<?php echo shell_exec("whoami"); ?>
Any suggestion is appreciated. Thank you!
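A likely cause, worth verifying on your setup, is that the browser runs PHP as the web server's user (often www-data or apache), which has no SSH keys or known_hosts entries, while the terminal run uses your own account. A quick check, assuming a Debian-style www-data user and a hypothetical key path:

```shell
# run the same command as the web server user; if this fails, the browser will too
sudo -u www-data ssh -tq firstname.lastname@example.org "whoami"

# if key auth is the problem, point ssh at a key the web user can read,
# and fail fast instead of prompting for a password
sudo -u www-data ssh -i /path/to/key -o BatchMode=yes firstname.lastname@example.org "whoami"
```

The error output from these runs (host key prompt, permission denied, etc.) usually points directly at what the web user is missing.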
Autokill broken reverse ssh tunnels
I have one server behind a NAT and a firewall, and another, in a different location, that is accessible via a domain. The server behind the NAT and firewall runs in a cloud environment and is designed to be disposable, i.e. if it breaks we can redeploy it with a single script; in this case it is OpenStack using a heat template. When that server fires up, it runs the following command to create a reverse SSH tunnel to the server outside the NAT and firewall, allowing us to connect via port 8080 on that server. The issue I am having is that if the SSH tunnel gets broken (say the server goes down), the tunnel remains, so when we redeploy the heat template to launch the server again it can no longer connect to that port unless I first kill the ssh process on the server outside the NAT.
Here is the command I currently use to start the reverse tunnel:
sudo ssh -f -N -T -R 9090:localhost:80 email@example.com
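One approach (a sketch, not tested against this particular OpenStack setup) is to make the client exit when the forwarding cannot be established, and to use keep-alives so a dead connection is detected and torn down instead of lingering:

```shell
# ExitOnForwardFailure: exit instead of running without the -R forwarding,
# so a redeployed server notices the port is still taken.
# ServerAliveInterval/CountMax: drop the connection after ~90s of silence,
# letting a broken tunnel die on its own.
sudo ssh -f -N -T \
    -o ExitOnForwardFailure=yes \
    -o ServerAliveInterval=30 \
    -o ServerAliveCountMax=3 \
    -R 9090:localhost:80 email@example.com
```

On the server outside the NAT, setting ClientAliveInterval and ClientAliveCountMax in sshd_config similarly makes sshd reap dead clients and free the forwarded port; autossh is another common way to get restart-on-failure behaviour on the client side.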
SSH port forwarding to VPN
I have a problem setting up an SSH tunnel from the client to an application running on the VPN server. I would like to be redirected to an app running directly on the VPN server after entering the localhost address.
I use NetworkManager to connect to the VPN. Operating system: Fedora 28.
When I try to use this command:
ssh -L 9001:VPN_APPLICATION_URL:80 localhost
I get this error:
channel_setup_fwd_listener: cannot listen to port: 443
Could not request local forwarding
I don't know if this is the right way to do it; I'm new to port forwarding.
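If I read the setup correctly, the ssh target should be the VPN server itself rather than localhost, and the local port must be one you are allowed to bind (above 1024 unless you run as root, which would explain the error about a low port). A sketch, with user and vpn-server-address standing in for your own credentials and the server's VPN address:

```shell
# forward local port 9001 to port 80 of the app as seen from the VPN server
ssh -N -L 9001:VPN_APPLICATION_URL:80 user@vpn-server-address
# then browse to http://localhost:9001/
```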
I'd like to know how to disable resizing of the terminal window; that is, so the user cannot drag a side or corner of the terminal to change the size of the window.
Hope you all understand.
Displaying all .png images with display command in linux
To display an image in a folder, one uses display image.png.
If I want to open all .png files in a subfolder, I tried display *.png, but it did not work.
How can this be done in an interactive shell?
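One thing to check: the shell expands the glob relative to the current directory, so for files in a subfolder the glob has to name that subfolder. A sketch, assuming ImageMagick's display and a hypothetical subfolder name:

```shell
# all .png files in one subfolder (space/backspace step between images)
display subfolder/*.png

# all .png files anywhere under the current directory
find . -name '*.png' -exec display {} +
```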
I can't open terminal on ubuntu 16.04 after python3 installation
I have a big problem: I tried to install Python 3 on Ubuntu 16.04, but after the installation my terminal doesn't open anymore. I then tried to remove Python, but the problem is still there. If I open xterm and try to launch gnome-terminal, I get this error:
"bash: /usr/bin/gnome-terminal: /usr/bin/python3: No such file or directory"
I think there is some problem with the different versions of Python on my PC, but I can't find a solution. Can someone help me? Thanks.
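On Ubuntu 16.04, /usr/bin/gnome-terminal is a Python 3 wrapper script, so this error usually means the manual install repointed or removed /usr/bin/python3. A possible fix, assuming the distro's Python 3.5 is still present:

```shell
# point python3 back at the interpreter the gnome-terminal wrapper expects
sudo ln -sf /usr/bin/python3.5 /usr/bin/python3

# then reinstall the affected packages if the terminal still fails to start
sudo apt-get install --reinstall python3-minimal gnome-terminal
```

In general on Ubuntu it is safer to install extra Python versions alongside the system one (e.g. into /usr/local or via a PPA) rather than replacing /usr/bin/python3, since system tools depend on it.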