Binary Options Trading
Learn to **trade binary options** online. Binary options are based on a simple yes-or-no proposition: you need to decide whether an asset will be above a certain price at a certain time. We suggest trying out [Binary School](http://gobinary24.com/mobile-app) – a mobile app for learning and trade simulation.
It is quite difficult for a newcomer today to grasp the principles of binary options. In the early days, BOs were marketed as a simple "Up / Down" choice for profit. It soon became clear, however, that merely clicking the button that picks the direction of a quote's movement is not enough to make money. You also need to determine, with reasonable accuracy, whether your choice will be correct.
An introduction to Linux through Windows Subsystem for Linux
I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.
An introduction to Linux through Windows Subsystem for Linux
tl;dr skip to next section So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of the options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online, so Google if in doubt.
Dual-booting with Windows and a Linux distro
Will basically involve partitioning your drive and installing Linux from an external bootable USB through your computer's boot menu. You'll get the full Linux experience.
Lots of Linux flavors to choose from. For beginners, Ubuntu and Linux Mint are generally recommended. I have Ubuntu 18.04 LTS; I'd recommend Ubuntu 20.04 LTS since it's newer, but it's all up to you.
However, it can be a pain to constantly be switching between operating systems. Maybe you wanna make the full jump to Linux, maybe you don't.
Life pro tip: if you go down this route, disable Windows 10's Fast Startup feature, as it will get very screwy with a dual-boot. I've also included a helpful guide in Appendix B.
Using a virtual machine (VM) to run Linux
Involves installing VM software, downloading a .iso image file of whatever operating system you'd like, and running it on your local machine.
Devours RAM and is generally pretty slow, would not recommend.
Using terminal emulators
These provide commands and functionality similar to a Linux terminal, but are still running on Windows architecture.
These days, the most commonly used Linux shell is called bash. bash stands for Bourne Again Shell (no, Bourne is not a typo), and it's likely what you'll be using as well.
Terminal emulators generally don't include a package manager, i.e. you can't download new bash programs, so pretty limited for general usage. BUT you can install a package manager externally, kind of hacky but can work.
Examples of terminal emulators include PuTTY, Git Bash, msys2 and mingw.
Using Windows Subsystem for Linux (either WSL 1 or WSL 2)
WSL provides a compatibility layer for running GNU/Linux programs natively on Windows 10. It has integration features with certain Windows 10 development apps (notably Visual Studio Code) as well.
You've got two options, WSL 1 and WSL 2. WSL 2 was recently released and features a real Linux kernel, as opposed to the simulated kernel in WSL 1. This means WSL 2 offers significant performance advantages, but it still lacks some of WSL 1's features.
WSL 1 is what I currently use, and thus what I'll be talking about in this guide. I'm not necessarily recommending it, frankly I regret not doing a dual-boot sooner and ditching Windows, but a dual-boot isn't for everyone and takes a lot of time you might not have right now.
Getting WSL initially setup is easy, but making it run smoothly requires some effort, and some features (like audio playback or displaying GUIs) require workarounds you can research if interested. WSL will also not work properly with low-level system tools.
With that out of the way, let's get started with setting up WSL 1 on your Windows 10 machine.
Setting up WSL
So if you've read this far I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here, I'll be covering what they talk about and diving into some other stuff.
1. Installing WSL
Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
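This is the feature-enable command from Microsoft's manual-installation documentation (assuming that's the one intended here); it's run in the elevated PowerShell window:

```shell
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
```

The `/norestart` flag is why the guide asks for a shutdown by hand in the next step.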
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows's Fast Startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.
2. Installing Ubuntu
Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and it seems to have the best integration with WSL, so that's what we'll be going with. (Other distros are available in the Store too, if you want to be quirky.) Once you type in Ubuntu, three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.

Installing plain-old "Ubuntu" means the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.

Once that's done installing, again hit the Winkey and open up Ubuntu. A console window should open, asking you to wait a minute or two for files to decompress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is, so that you don't have to juggle two different username/password combinations, but it's up to you. Finally, to upgrade all your packages, type in
sudo apt-get update
And then
sudo apt-get upgrade
apt-get is Ubuntu's package manager; it's what you'll use to install additional programs on WSL.
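A quick sketch of the day-to-day apt-get workflow (the package names here are just examples):

```shell
sudo apt-get update            # refresh the list of available packages
sudo apt-get install python3   # install a program and its dependencies
apt-cache search markdown      # search the package index by keyword
sudo apt-get remove python3    # uninstall it again
```

Most installs need `sudo`, since packages are written to system directories.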
3. Making things nice and crispy: an introduction to UNIX-based filesystems
tl;dr skip to the next section The two steps above are technically all you need to run WSL on your system. However, you may notice that whenever you open up the Ubuntu app, your current folder seems to be completely random. If you type in pwd (for Print Working Directory; 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/<username>. Where is this folder? Is it my home folder?

Type in ls (for LiSt) to see what files are in this folder. You probably won't get any output, because, surprise surprise, this folder is not your Windows home folder and is in fact empty. (Okay, it's actually not empty, which we'll see in a bit. If you type in ls -a, a for All, you'll see other files, but notice they have a period in front of them. This is a convention for specifying files that should be hidden by default, and ls, as well as most other commands, will honor this convention. Anyways.)

So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux, after all. Notice how, when you typed pwd earlier, the address you got was /home/<username>. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see the top-most directories in your system.

Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A. For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath: /mnt/c/Users/<username>.
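The exploration above, condensed into the commands themselves:

```shell
pwd      # print the working directory you landed in
ls -a    # list everything here, including hidden dotfiles
ls /     # the top-most directories in the system
# On WSL specifically, your Windows C drive is mounted at /mnt/c:
# ls /mnt/c/Users
```

The last command is commented out because /mnt/c only exists under WSL.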
When we open up Ubuntu, we don't want it tossing us in this random /home/<username> directory, we want our Windows home folder. Let's change that!
4. Changing your default home folder
Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu password. sudo is a command that gives you root privileges in bash (akin to right-clicking in Windows and selecting 'Run as administrator'). vim is a command-line text-editing tool, which out-of-the-box functions kind of like a crummy Notepad (you can customize it infinitely, though, and some people have insane vim setups; Appendix B has more info). /etc/passwd is a plaintext file that historically was used to store passwords back when encryption wasn't a big deal, but now instead stores essential user info read every time you open up WSL. Anyway, once you've typed that in, vim will open the file in your shell. Using arrow keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks something like

pizzatron3000:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use the arrow keys to navigate to that home-directory field (the second-to-last, colon-separated field), and delete /home/pizzatron3000 by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/<username>. Type that in. For me, the line now looks like

pizzatron3000:x:1000:1000:,,,:/mnt/c/Users/pizzatron3000:/bin/bash
Next, press esc to exit insert mode, then type in the following:
:wq
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file. Great! If you now open up a new terminal and type in pwd, you should be in your Windows home folder! However, things seem to be lacking their usual color...
5. Importing your configuration files into the new home directory
Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Windows home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/<username>, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home/<username>/. ~
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the /. at the end is cp-specific syntax that makes it copy the directory's contents, including hidden files, and the ~ is a quick way of writing your home directory's filepath (which is now /mnt/c/Users/<username>) without having to type all that in again. Once you've run this, all your configuration files should be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!
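The difference the trailing `/.` makes can be seen with a throwaway directory (the paths here are made up for the demo):

```shell
mkdir -p /tmp/cpdemo/src /tmp/cpdemo/dst
touch /tmp/cpdemo/src/.bashrc /tmp/cpdemo/src/notes.txt
# 'src/.' copies the directory's contents, hidden dotfiles included
cp -r /tmp/cpdemo/src/. /tmp/cpdemo/dst
ls -a /tmp/cpdemo/dst    # both .bashrc and notes.txt are there
```

By contrast, `cp -r src/* dst` would skip `.bashrc`, because the shell's `*` glob ignores dotfiles by default.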
6. Tips & tricks
Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).

alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent: you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass it. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /; this is saying 'delete literally everything and don't ask for confirmation', and your computer will die. Newer versions of rm fail when you type this in, but don't push your luck. You've been warned. Be careful.

export DISPLAY=:0 allows you to open graphical interfaces through Ubuntu if you install an X server such as VcXsrv (launched via XLaunch). The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.
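Put together, the two additions to ~/.profile look like this:

```shell
# ask for confirmation on every delete; rm -f still bypasses the prompt
alias rm='rm -i'
# point GUI programs at an X server (e.g. VcXsrv) on the Windows side
export DISPLAY=:0
```

Changes to .profile take effect in new shells, or immediately after running `source ~/.profile`.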
Appendix A: brief intro to top-level UNIX directories
tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process, so it contains the most important commands.
boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
dev: devices, provides files that allow Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it.
etc: historically short for "et cetera"; it contains system-wide configuration files
home: equivalent to Windows's C:/Users folder, contains home folders for the different users. In an Ubuntu system, under /home/<username> you'd find the Documents folder, Downloads folder, etc.
lib: libraries used by the system
lib64: 64-bit libraries used by the system
mnt: mount, where your drives are located
opt: third-party applications that (usually) don't have any dependencies outside the scope of their own package
proc: process information, contains runtime information about your system (e.g. memory, mounted devices, hardware configurations, etc)
run: directory for programs to store runtime information.
srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
sys: system, provides information about different I/O devices to the Linux kernel. If dev files allow you to access I/O devices, sys files tell you information about those devices.
tmp: temporary, these are system runtime files that are (in most Linux distros) cleared out after every reboot. It's also sort of deprecated for security reasons, and programs will generally prefer to use run.
usr: contains additional UNIX commands, header files for compiling C programs, among other things. Kind of like bin but for less important programs. Most of everything you install using apt-get ends up here.
var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.
Also keep in mind that all of this is just convention. No Linux distribution needs to follow this file structure, and in fact almost all will deviate from what I just described. Hell, you could make your own Linux fork where /mnt/c information is stored in tmp.
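A few of the directories above can be poked at safely from any shell:

```shell
ls /                             # the top-level directories described above
echo "gone forever" > /dev/null  # /dev/null silently discards anything written to it
ls /etc | head -5                # a handful of the system-wide config files
```

None of these commands modify anything, so they're safe to experiment with.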
Hey all. I've been a Java/C#/Python dev for a number of years. I noticed Rust topping the StackOverflow most-loved language list earlier this year, and I've been hearing good things about Rust's memory model and "free" concurrency for a while. When it recently came time to rewrite one of my projects as a small webservice, it seemed like the perfect time to learn Rust.

I've been at this for about a month, and so far I'm not understanding the love at all. I haven't spent this much time fighting a language in a while. I'll keep the frustration to myself, but I do have a number of critiques I wouldn't mind discussing. Perhaps my perspective as a beginner will be helpful to someone. Hopefully someone else has faced some of the same issues and can explain why the language is still worthwhile.

Fwiw - I'm going to make a lot of comparisons to the languages I'm comfortable with. I'm not attempting to make a value comparison of the languages themselves, but simply comparing workflows I like with workflows I find frustrating or counterintuitive.

Docs

When I have a question about a language feature in C# or Python, I go look at the official language documentation. Python in particular does a really nice job of breaking down what a class is designed to do and how to do it. Rust's standard docs are little more than Javadocs with extremely minimal examples. There are more examples in the Rust Book, but these too are super simplified. Anything more significant requires research on third-party sites like StackOverflow, and Rust is too new to have a lot of content there yet.

It took me a week and a half of fighting the borrow checker to realize that HashMap.get_mut() was not the correct way to get and modify a map entry whose value was a non-primitive object. Nothing in the official docs suggested this, and I was actually on the verge of quitting the language over it until someone linked Tour of Rust, which did have a useful map example, in a Reddit comment.
(If any other poor soul stumbles across this - you need HashMap.entry().or_insert(), and you modify the resulting entry in place, e.g. my_entry.value = whatever. The borrow checker doesn't allow getting the entry, modifying it, and putting it back in the map.)

Pit of Success/Failure

C# has the concept of a pit of success: the most natural thing to do should be the correct thing to do. It should be easy to succeed and hard to fail. Rust takes the opposite approach: every natural thing to do is a landmine. Option.unwrap() can and will terminate my program. String.len() sets me up for a crash when I try to do character processing, because what I actually want is String.chars().count(). HashMap.get_mut() is only viable if I know ahead of time that the entry I want is already in the map, because HashMap.get_mut().unwrap_or() is a snake pit, and simply calling get_mut() is apparently enough for the borrow checker to think the map is mutated, so reinserting the map entry afterward causes a borrow error. If-else statements aren't idiomatic. Neither is return.

Language philosophy

Python has the saying "we're all adults here." Nothing is truly private, and devs are expected to be competent enough to know what they should and shouldn't modify. It's possible to monkey-patch (overwrite) pretty much anything, including standard functions. The sky's the limit.

C# has visibility modifiers and the concept of sealing classes to prevent further extension or modification. You can get away with a lot of stuff using inheritance or even extension methods to tack functionality onto existing classes, but if the original dev wanted something to be private, it's (almost) guaranteed to be. (Reflection is still a thing, it's just understood to be dangerous territory a la Python's monkey patching.) This is pretty much "we're all professionals here"; I'm trusted to do my job, but I'm not trusted with the keys to the nukes.

Rust doesn't let me so much as reference a variable twice in the same method.
This is the functional equivalent of being put in a straitjacket because I can't be trusted to not hurt myself. It also means I can't do anything.

The borrow checker

This thing is legendary. I don't understand how it's smart enough to theoretically track data usage across threads, yet dumb enough to complain about variables which are only modified inside a single method. Worse still, it likes to complain about variables which aren't even modified.

Here's a fun example. I do the same assignment twice (in a real-world context, there are operations that don't matter in between). This is apparently illegal unless Rust can move the value on the right-hand side of the assignment, even though the second assignment is technically a no-op.
//let Demo be any struct that doesn't implement Copy.
let mut demo_object: Option<Demo> = None;
let demo_object_2: Demo = Demo::new(1, 2, 3);
demo_object = Some(demo_object_2);
demo_object = Some(demo_object_2); //error: use of moved value
Checking an Option via .is_none and then querying its inner value via .unwrap is also illegal, because .unwrap seems to move the value even if no mutations take place and the variable is immutable:
let demo_collection: Vec<Demo> = Vec::<Demo>::new();
let demo_object: Option<Demo> = None;

for collection_item in demo_collection {
    if demo_object.is_none() {
    }
    if collection_item.value1 > demo_object.unwrap().value1 {
        //error: demo_object moved by unwrap in a previous iteration
    }
}
And of course, the HashMap example I mentioned earlier, in which calling get_mut apparently counts as mutating the map, regardless of whether the map contains the key being queried or not:
let mut demo_collection: HashMap<i32, Demo> = HashMap::<i32, Demo>::new();
demo_collection.insert(1, Demo::new(1, 2, 3));

let mut demo_entry = demo_collection.get_mut(&57);
let mut demo_value: &mut Demo;

//we can't call .get_mut.unwrap_or, because we can't construct the default
//value in-place. We'd have to return a reference to the newly constructed
//default value, which would become invalid immediately. Instead we get to
//do things the long way.
let mut default_value: Demo = Demo::new(2, 4, 6);

if demo_entry.is_some() {
    demo_value = demo_entry.unwrap();
} else {
    demo_value = &mut default_value;
}

demo_collection.insert(1, *demo_value);
None of this code is especially remarkable or dangerous, but the borrow checker seems absolutely determined to save me from myself. In a lot of cases, I end up writing code which is a lot more verbose than the equivalent Python or C#, just trying to work around the borrow checker. This is rather tongue-in-cheek, because I understand the borrow checker is integral to what makes Rust tick, but I think I'd enjoy this language a lot more without it.

Exceptions

I can't emphasize this one enough, because it's terrifying. The language flat up encourages terminating the program in the event of some unexpected error happening, forcing me to predict every possible execution path ahead of time. There is no forgiveness in the form of try-catch. The best I get is Option or Result, and nobody is required to use them. This puts me at the mercy of every single crate developer for every single crate I'm forced to use. If even one of them decides a specific input should cause a panic, I have to sit and watch my program crash.

Something like this came up in a Python program I was working on a few days ago - a web-facing third-party library didn't handle a web-related exception, and it bubbled up to my program. I just added another except clause to the try-except I already had wrapped around that library call, and that took care of the issue. In Rust, I'd have to find a whole new crate, because I have no ability to stop this one from crashing everything around it.

Pushing stuff outside the standard library

Rust deliberately maintains a small standard library. The devs are concerned about the commitment of adding things that "must remain as-is until the end of time." This basically forces me into a world where I have to get 50 billion crates with different design philosophies and different ways of doing things to play nicely with each other.
It forces me into a world where any one of those crates can and will be abandoned at a moment's notice; I'll probably have to find replacements for everything every few years. And it puts me at the mercy of whoever developed those crates, who has the language's blessing to terminate my program if they feel like it. Making more stuff standard would guarantee a consistent design philosophy, provide stronger assurance that things won't panic every three lines, and mean that yes, I can use that language feature as long as the language itself is around (assuming said feature doesn't get deprecated, but even then I'd have enough notice to find something else).

Testing is painful

Tests are definitively second-class citizens in Rust. Unit tests are expected to sit in the same file as the production code they're testing. What? There's no way to tag tests to run groups of tests later; tests can be run singly, using a wildcard match on the test function name, or be ignored entirely using #[ignore]. That's it.

Language style

This one's subjective. I expect to take some flak for this, and that's okay.
Conditionals with two possible branches should use if-else. Conditionals of three or more branches can use switch statements. Rust tries to wedge match into everything. Options are a perfect example of this - either a thing has a value (is_some()) or it doesn't (is_none()) but examples in the Rust Book only use match.
Match syntax is virtually unreadable because the language encourages heavy match use (including nested matches) with large blocks of code and no language feature to separate different blocks. Something like C#'s break/case statements would be nice here - they signal the end of one case and start another. Requiring each match case to be a short, single line would also be good.
Allowing functions to return a value without using the keyword return is awful. It causes my IDE to perpetually freak out when I'm writing a method because it thinks the last line is a malformed return statement. It's harder to read than a return X statement would be. It's another example of the Pit of Failure concept from earlier - the natural thing to do (return X) is considered non-idiomatic and the super awkward thing to do (X) is considered idiomatic.
return if {} else {} is really bad for readability too. It's a lot simpler to put the return statement inside the if and else blocks, where you're actually returning a value.
We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
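On Linux or Mac OS X, that verification boils down to a few commands; the key filename and hash-file name below are illustrative, not official paths:

```shell
# import binaryFate's signing key from the Monero source tree (under /utils/gpg_keys)
gpg --import utils/gpg_keys/binaryFate.asc
# verify the signature on the signed message containing the SHA256 sums (saved as hashes.txt)
gpg --verify hashes.txt
# hash the downloaded binary and compare it against the signed list by eye
sha256sum monero-linux-x64-v0.16.0.3.tar.bz2
```

If `gpg --verify` reports a good signature from binaryFate and the sums match, the binary is intact.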
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
## CLI
75b198869a3a117b13b9a77b700afe5cee54fd86244e56cb59151d545adbbdfd  monero-android-armv7-v0.16.0.3.tar.bz2
b48918a167b0961cdca524fad5117247239d7e21a047dac4fc863253510ccea1  monero-android-armv8-v0.16.0.3.tar.bz2
727a1b23fbf517bf2f1878f582b3f5ae5c35681fcd37bb2560f2e8ea204196f3  monero-freebsd-x64-v0.16.0.3.tar.bz2
6df98716bb251257c3aab3cf1ab2a0e5b958ecf25dcf2e058498783a20a84988  monero-linux-armv7-v0.16.0.3.tar.bz2
6849446764e2a8528d172246c6b385495ac60fffc8d73b44b05b796d5724a926  monero-linux-armv8-v0.16.0.3.tar.bz2
cb67ad0bec9a342b0f0be3f1fdb4a2c8d57a914be25fc62ad432494779448cc3  monero-linux-x64-v0.16.0.3.tar.bz2
49aa85bb59336db2de357800bc796e9b7d94224d9c3ebbcd205a8eb2f49c3f79  monero-linux-x86-v0.16.0.3.tar.bz2
16a5b7d8dcdaff7d760c14e8563dd9220b2e0499c6d0d88b3e6493601f24660d  monero-mac-x64-v0.16.0.3.tar.bz2
5d52712827d29440d53d521852c6af179872c5719d05fa8551503d124dec1f48  monero-win-x64-v0.16.0.3.zip
ff094c5191b0253a557be5d6683fd99e1146bf4bcb99dc8824bd9a64f9293104  monero-win-x86-v0.16.0.3.zip
#
## GUI
50fe1d2dae31deb1ee542a5c2165fc6d6c04b9a13bcafde8a75f23f23671d484  monero-gui-install-win-x64-v0.16.0.3.exe
20c03ddb1c82e1bcb73339ef22f409e5850a54042005c6e97e42400f56ab2505  monero-gui-linux-x64-v0.16.0.3.tar.bz2
574a84148ee6af7119fda6b9e2859e8e9028fe8a8eec4dfdd196aeade47e9c90  monero-gui-mac-x64-v0.16.0.3.dmg
371cb4de2c9ccb5ed99b2622068b6aeea5bdfc7b9805340ea7eb92e7c17f2478  monero-gui-win-x64-v0.16.0.3.zip
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl81bL8ACgkQ8K9NRioL
35J+UA//bgY6Mhikh8Cji8i2bmGXEmGvvWMAHJiAtAG2lgW3BT9BHAFMfEpUP5rk
svFNsUY/Uurtzxwc/myTPWLzvXVMHzaWJ/EMKV9/C3xrDzQxRnl/+HRS38aT/D+N
gaDjchCfk05NHRIOWkO3+2Erpn3gYZ/VVacMo3KnXnQuMXvAkmT5vB7/3BoosOU+
B1Jg5vPZFCXyZmPiMQ/852Gxl5FWi0+zDptW0jrywaS471L8/ZnIzwfdLKgMO49p
Fek1WUUy9emnnv66oITYOclOKoC8IjeL4E1UHSdTnmysYK0If0thq5w7wIkElDaV
avtDlwqp+vtiwm2svXZ08rqakmvPw+uqlYKDSlH5lY9g0STl8v4F3/aIvvKs0bLr
My2F6q9QeUnCZWgtkUKsBy3WhqJsJ7hhyYd+y+sBFIQH3UVNv5k8XqMIXKsrVgmn
lRSolLmb1pivCEohIRXl4SgY9yzRnJT1OYHwgsNmEC5T9f019QjVPsDlGNwjqgqB
S+Theb+pQzjOhqBziBkRUJqJbQTezHoMIq0xTn9j4VsvRObYNtkuuBQJv1wPRW72
SPJ53BLS3WkeKycbJw3TO9r4BQDPoKetYTE6JctRaG3pSG9VC4pcs2vrXRWmLhVX
QUb0V9Kwl9unD5lnN17dXbaU3x9Dc2pF62ZAExgNYfuCV/pTJmc=
=bbBm
-----END PGP SIGNATURE-----
Upgrading (GUI)
Note that you should be able to use the recently added automatic updater in the GUI. A pop-up will appear with the new binary. In case you want to update manually, perform the following steps:
Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows: [1] On the second page of the wizard (the first page is language selection), choose Open a wallet from file. [2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), /Users/<username>/Monero/ (Mac OS X), or /home/<username>/Monero/ (Linux). Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.
Upgrading (CLI)
You ought to perform the following steps:
Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or Github.
Extract the new binaries to a new directory of your liking.
Copy over the wallet files from the old directory (i.e. the v0.15.x.x or v0.16.0.x directory).
Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.16.0.3, it will simply pick up where it left off.
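As a shell sketch, the CLI steps above look something like the following on Linux (the directory and wallet names are illustrative, and the extracted directory name may differ per release):

```shell
# 2. extract the new binaries into their own directory
tar -xjf monero-linux-x64-v0.16.0.3.tar.bz2
# 3. copy the wallet files over from the old release's directory
cp monero-v0.15.x.x/my_wallet* monero-v0.16.0.3/
# 4. start the daemon; it resumes from the existing blockchain, no resync
cd monero-v0.16.0.3
./monerod
```

The wallet files travel with you; the blockchain data stays where it was and is simply reused.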
Release notes (GUI)
macOS app is now notarized by Apple
CMake improvements
Add support for IPv6 remote nodes
Add command history to Logs page
Add "Donate to Monero" button
Indicate probability of finding a block on Mining page
In the wizard, you can select either Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to set a remote node manually, use Advanced mode. A guide can be found here: https://www.getmonero.org/resources/user-guides/remote_node_gui.html
We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
8e3ce10997ab50eec2ec3959846d61b1eb3cb61b583c9f0f9f5cc06f63aaed14 monero-android-armv7-v0.16.0.1.tar.bz2
d9e885b3b896219580195fa4c9a462eeaf7e9f7a6c8fdfae209815682ab9ed8a monero-android-armv8-v0.16.0.1.tar.bz2
4f4a2c761b3255027697cd57455f5e8393d036f225f64f0e2eff73b82b393b50 monero-freebsd-x64-v0.16.0.1.tar.bz2
962f30701ef63a133a62ada24066a49a2211cd171111828e11f7028217a492ad monero-linux-armv7-v0.16.0.1.tar.bz2
83c21fe8bb5943c4a4c77af90980a9c3956eea96426b4dea89fe85792cc1f032 monero-linux-armv8-v0.16.0.1.tar.bz2
4615b9326b9f57565193f5bfe092c05f7609afdc37c76def81ee7d324cb07f35 monero-linux-x64-v0.16.0.1.tar.bz2
3e4524694a56404887f8d7fedc49d5e148cbf15498d3ee18e5df6338a86a4f68 monero-linux-x86-v0.16.0.1.tar.bz2
d226c704042ff4892a7a96bb508b80590a40173683101db6ad3a3a9e20604334 monero-mac-x64-v0.16.0.1.tar.bz2
851b57ec0783d191f0942232e431aedfbc2071125b1bd26af9356c7b357ab431 monero-win-x64-v0.16.0.1.zip
e944d15b98fcf01e54badb9e2d22bae4cd8a28eda72c3504a8156ee30aac6b0f monero-win-x86-v0.16.0.1.zip
#
## GUI
d35c05856e669f1172207cbe742d90e6df56e477249b54b2691bfd5c5a1ca047 monero-gui-install-win-x64-v0.16.0.2.exe
9ff8c91268f8eb027bd26dcf53fda5e16cb482815a6d5b87921d96631a79f33f monero-gui-linux-x64-v0.16.0.2.tar.bz2
142a1e8e67d80ce2386057e69475aa97c58ced30f0ece3f4b9f5ea5b62e48419 monero-gui-mac-x64-v0.16.0.2.tar.bz2
6e0efb25d1f5c45a4527c66ad6f6c623c08590d7c21de5a611d8c2ff0e3fbb55 monero-gui-win-x64-v0.16.0.2.zip
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl8JaBMACgkQ8K9NRioL
35IKbhAAnmfm/daG2K+llRBYmNkQczmVbivbu9JLDNnbYvGuTVH94PSFC/6K7nnE
8EkiLeIVtBBlyr4rK288xSJQt+BMVM93LtzHfA9bZUbZkjj2le+KN8BHcmgEImA8
Qm2OPgr7yrxvb3aD5nQUDoaeQSmnkLCpN2PLbNGymOH0+IVl1ZYjY7pUSsJZQGvC
ErLxZSN5TWvX42LcpyBD3V7//GBOQ/gGpfB9fB0Q5LgXOCLlN2OuQJcYY5KV3H+X
BPp9IKKJ0OUGGm0j7mi8OvHxTO4cbHjU8NdbtXy8OnPkXh24MEwACaG1HhiNc2xl
LhzMSoMOnVbRkLLtIyfDC3+PqO/wSxVamphKplEncBXN28AakyFFYOWPlTudacyi
SvudHJkRKdF0LVIjXOzxBoRBGUoJyyMssr1Xh67JA+E0fzY3Xm9zPPp7+Hp0Pe4H
ZwT7WJAKoA6GqNpw7P6qg8vAImQQqoyMg51P9Gd+OGEo4DiA+Sn5r2YQcKY5PWix
NlBTKq5JlVfRjE1v/8lUzbe+Hq10mbuxIqZaJ4HnWecifYDd0zmfQP1jt7xsTCK3
nxHb9Tl1jVdIuu2eCqGTG+8O9ofjVDz3+diz6SnpaSUjuws218QCZGPyYxe91Tz8
dCrf41FMHYhO+Lh/KHFt4yf4LKc0c048BoVUg6O0OhNIDTsvd/k=
=akVA
-----END PGP SIGNATURE-----
Upgrading
Note that, once the DNS records are updated, you should be able to use the automatic updater that was recently added to the GUI. A pop-up will appear with the new binary. In case you want to update manually, you ought to perform the following steps:
Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows: [1] On the second page of the wizard (first page is language selection) choose Open a wallet from file [2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users//Monero/ (Mac OS X), or home//Monero/ (Linux). Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.
Release notes
Point release:
Fix bug that inhibited Ledger Monero users from properly sending transactions containing multiple inputs.
CMake improvements
Minor security relevant fixes
Various bug fixes
Major release:
Simple mode: node selection algorithm improved
UX: display estimated transaction fee
UX: add update dialog with download and verify functionality
UX: implement autosave feature
UI: redesign advanced options on transfer page
UI: improve daemon sync progress bar
UI: new language sidebar
UI: new processing splash design
UI: redesign settings page
Trezor: support new passphrase entry mechanism
Wizard: add support for seed offset
Dandelion++
Major Bulletproofs verification performance optimizations
In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to set a remote node manually, you ought to use Advanced mode. A guide can be found here: https://www.getmonero.org/resources/user-guides/remote_node_gui.html
I really enjoyed m4nz's recent post: Getting into DevOps as a beginner is tricky - My 50 cents to help with it and wanted to do my own version of it, in hopes that it might help beginners as well. I agree with most of their advice and recommend folks check it out if you haven't yet, but I wanted to provide more of a simple list of things to learn and tools to use to complement their solid advice.
Background
While I went to college and got a degree, it wasn't in computer science. I simply developed an interest in Linux and Free & Open Source Software as a hobby. I set up a home server and home theater PC before smart TVs and Roku were really a thing, simply because I thought it was cool and interesting and enjoyed the novelty of it. Fast forward a few years and basically I was just tired of being poor lol. I had heard on the now defunct Linux Action Show podcast about linuxacademy.com and how people had had success getting Linux jobs despite not having a degree by taking the courses there and acquiring certifications. I took a course, got the basic LPI Linux Essentials Certification, then got lucky by landing literally the first Linux job I applied for at a consulting firm as a junior sysadmin. With no CS degree, no real experience, and one measly certification, I figured I had to level up my skills as quickly as possible, and this is where I really started to get into DevOps tools and methodologies. I now have 5 years of experience in the IT world, most of it doing DevOps/SRE work.
Certifications
People have varying opinions on the relevance and worth of certifications. If you already have a CS degree or experience then they're probably not needed unless their structure and challenge would be a good motivation for you to learn more. Without experience or a CS degree, you'll probably need a few to break into the IT world unless you know someone or have something else to prove your skills, like a github profile with lots of open source contributions, or a non-profit you built a website for or something like that. Regardless of their efficacy at judging a candidate's ability to actually do DevOps/sysadmin work, they can absolutely help you get hired in my experience. Right now, these are the certs I would recommend beginners pursue. You don't necessarily need all of them to get a job (I got started with just the first one on this list), and any real world experience you can get will be worth more than any number of certs imo (both in terms of knowledge gained and in increasing your prospects of getting hired), but this is a good starting place to help you plan out what certs you want to pursue. Some hiring managers and DevOps professionals don't care at all about certs, some folks will place way too much emphasis on them ... it all depends on the company and the person interviewing you. In my experience I feel that they absolutely helped me advance my career. If you feel you don't need them, that's cool too ... they're a lot of work so skip them if you can of course lol.
LPI Linux Essentials - basic multiple choice test on Linux basics. Fairly easy, especially if you have *nix experience; otherwise I'd recommend taking a course like I did. linuxacademy worked for me, but there are other sites out there that can help. For this one, you can probably get by just searching youtube for the topics covered on the test.
Linux Foundation Certified System Administrator - This one is a hands-on test, which is great: you do a screen share with a proctor and ssh into their server, then you have a list of objectives to accomplish on the server pretty much however you see fit. Write a big bash script to do it all, do like 100 mv commands manually, write a small program in python lol, whatever you want so long as you accomplish the goals in time.
Amazon Web Services certs - I would go for all 3 associate-level certs if you can: Solutions Architect, SysOps Administrator, Developer. These are quite tedious to study for, as at times they can be more a certification that you know which AWS products to get your client to use than a test of your cloud knowledge. For better or worse, AWS is the top cloud provider at the moment, so showing you have knowledge there opens you up to the most jobs. If you know you want to work with another cloud provider, then the Google certs can be swapped in here, for example. I know that with the AWS certs, I get offers all the time from companies that use GCP even though I have no real experience there. Folks with the google certs: is the reverse true for you? (genuinely asking, it would be useful for beginners to know).
Certified Kubernetes Administrator - I don't actually have this cert since at this point in my career I have real Kubernetes experience on my resume, so it's kind of not needed, but if you want to learn Kubernetes and prove it to prospective employers, it can help.
Tools and Experimentation
While certs can help you get hired, they won't make you a good DevOps Engineer or Site Reliability Engineer. The only way to get good, just like with anything else, is to practice. There are a lot of sub-areas in the DevOps world to specialize in ... though in my experience, especially at smaller companies, you'll be asked to do a little (or a lot) of all of them. Though definitely not exhaustive, here's a list of tools you'll want to gain experience with both as points on a resume and as trusty tools in your tool belt you can call on to solve problems. While there is plenty of "resume driven development" in the DevOps world, these tools are solving real problems that people encounter and struggle with all the time, i.e., you're not just learning them because they are cool and flashy, but because not knowing and using them is a giant pain!
Linux! - Unless you want to only work with Windows for some reason, Linux is the most important thing you can learn to become a good DevOps professional in my view. Install it on your personal laptop, try a bunch of different distributions, develop an opinion on systemd vs. other init systems ;), get a few cloud servers on DigitalOcean or AWS to mess around with, set up a home server, try different desktop environments and window managers, master a cli text editor, break your install and try to fix it, customize your desktop until it's unrecognizable lol. Just get as much experience with Linux as possible!
git - Aside from general Linux knowledge, git is one of the most important tools for DevOps/SREs to know in my view. A good DevOps team will usually practice "git ops," i.e., making changes to your CI/CD pipeline, infrastructure, or server provisioning will involve making a pull request against the appropriate git repo.
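As an illustration of that "git ops" flow, here's a hypothetical change made locally; the repo, branch, and file names are made up, and the push/PR step is commented out since it needs a real remote:

```shell
#!/bin/sh
# Hypothetical git-ops workflow: every infra change goes through a branch
# and a pull request. All names here are invented for illustration.
set -e
mkdir -p repo
git -C repo init -q
git -C repo config user.email "you@example.com"
git -C repo config user.name "You"

echo "replicas: 2" > repo/deploy.yaml
git -C repo add deploy.yaml
git -C repo commit -qm "initial deploy config"

# Make the change on a branch, then open a pull request for review:
git -C repo checkout -qb bump-replicas
sed -i 's/replicas: 2/replicas: 3/' repo/deploy.yaml
git -C repo commit -aqm "scale service to 3 replicas"
#   git -C repo push origin bump-replicas   # then open the PR in your forge
git -C repo log --oneline
```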
terraform - terraform is the de facto "infrastructure as code" tool in the DevOps world. Personally, I love it despite its pain points. It's a great place to start once you have a good foundation of Linux and cloud knowledge, as it will allow you to easily and quickly bring up infrastructure to practice with the other tools on this list.
packer - While not hugely popular or widely used, it's such a simple and useful tool that I recommend you check it out. Packer lets you build "immutable server images" with all of the tools and configuration you need baked in, so that your servers come online ready to start working immediately without any further provisioning needed. Combined with terraform, you can bring up Kubernetes clusters with a single command, or any other fancy DevOps tools you want to play with.
ansible - With the advent of Kubernetes and container orchestration, "configuration management" has become somewhat less relevant ... or at least less of a flashy and popular topic. It is still something you should be familiar with and it absolutely is in wide use at many companies. Personally, I love the combination of ansible + packer + terraform and find it very useful. Chef and Puppet are nice too, but Ansible is the most popular last I checked so unless you have a preference (or already know Ruby) then I'd go with that.
jenkins - despite its many, many flaws and pain points lol, Jenkins is still incredibly useful and widely used as a CI/CD solution, and it's fairly easy to get started with. EDIT: Upon further consideration, Jenkins may not be the best choice for beginners to learn. At this point, you’re probably better off with something like GitLab: it’s a more powerful and useful tool, you’ll learn YAML for its config, and it’s less of a pain to use. If you know Jenkins that’s great and it will probably help you get a job, but then you might implement Jenkins since it’s what you know ... so if you have the chance, choose another tool.
postgres - Knowledge of SQL databases is very useful, both from a DBA standpoint and the operations side of things. You might be helping developers develop a new service and helping with setting up schema (or doing so yourself for an internal tool), or you might be spinning up an instance for devs to access, or even pinpointing that a SQL query is the bottleneck in an app's performance. I put Postgres here because that's what I personally use and have seen a lot in the industry, but experience with any SQL database will be useful.
nginx - nginx is commonly used as an HTTP server for simple services or as an ingress option for Kubernetes. Learn the basic config options, how to do TLS, etc.
docker - Ah, the buzzword of yesteryear. Docker and containerization are still incredibly dominant as a paradigm in the DevOps world right now, and it is paramount that you learn and master them. Be comfortable writing Dockerfiles and troubleshooting docker networking, learn the fundamentals of how Linux containers work ... and definitely get familiar with Alpine Linux, as it will most likely be the base image for most of your company's docker images.
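To make that concrete, a minimal Alpine-based Dockerfile might look like the sketch below. The image tag, file names, and app script are hypothetical, and the build/run commands are commented out since they need a Docker daemon:

```shell
#!/bin/sh
# Write out a hypothetical minimal Dockerfile (names are illustrative).
cat > Dockerfile <<'EOF'
FROM alpine:3.18
RUN apk add --no-cache curl
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
EOF

# Build and run it (commented out; requires a Docker daemon):
#   docker build -t myapp:latest .
#   docker run --rm myapp:latest
cat Dockerfile
```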
kubernetes - At many companies, DevOps Engineer/Site Reliability Engineer effectively translates to "Kubernetes Babysitter," especially if you're new on the job. Container orchestration, while no longer truly "cutting edge," is still fairly new, and there is high demand for people with knowledge and experience with it. Work through Kubernetes The Hard Way to bring up a cluster manually. Learn and know the various "primitives" like pods and replicasets. Learn about ingress and how to expose services.
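For a feel of those primitives, here's a sketch of a first pod manifest; the pod name and image are made up, and the kubectl commands are commented out since they need a running cluster:

```shell
#!/bin/sh
# Write out a minimal, hypothetical Pod manifest.
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: alpine:3.18
    command: ["sleep", "3600"]
EOF

# Apply and inspect it (commented out; requires a cluster):
#   kubectl apply -f pod.yaml
#   kubectl get pods
#   kubectl logs hello
cat pod.yaml
```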
There are many, many other DevOps tools I left out that are worthwhile (I didn't even touch the tools in the kubernetes space like helm and spinnaker). Definitely don't stop at this list! A good DevOps engineer is always looking to add useful tools to their tool belt. This industry changes so quickly, it's hard to keep up. That's why it's important to also learn the "why" of each of these tools, so that you can determine which tool would best solve a particular problem. Nearly everything on this list could be swapped for another tool to accomplish the same goals. The ones I listed are simply the most common/popular and so are a good place to start for beginners.
Programming Languages
Any language you learn will be useful and make you a better sysadmin/DevOps Eng/SRE, but these are the 3 I would recommend that beginners target first.
Bash - It's right there in your terminal and for better or worse, a scarily large amount of the world's IT infrastructure depends on ill-conceived and poorly commented bash scripts. It's bash scripts all the way down. I joke, but bash is an incredibly powerful tool and a great place to start learning programming basics like control flow and variables.
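For example, the kind of basics bash teaches you right away — variables, loops, conditionals, and arithmetic (the service names here are arbitrary):

```shell
#!/bin/sh
# Tiny demo of shell variables and control flow.
count=0
for svc in nginx postgres redis; do
  if [ "$svc" = "postgres" ]; then
    echo "$svc: database"
  else
    echo "$svc: something else"
  fi
  count=$((count + 1))
done
echo "checked $count services"
```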
Python - It has a beautiful syntax, it's easy to learn, and the python shell makes it quick to learn the basics. Many companies have large repos of python scripts used by operations for automating all sorts of things. Also, many older DevOps tools (like ansible) are written in python.
Go - Go makes for a great first "systems language" in that it's quite powerful and gives you access to some low level functionality, but the syntax is simple, explicit and easy to understand. It's also fast, compiles to static binaries, has a strong type system and it's easier to learn than C or C++ or Rust. Also, most modern DevOps tools are written in Go. If the documentation isn't answering your question and the logs aren't clear enough, nothing beats being able to go to the source code of a tool for troubleshooting.
Expanding your knowledge
As m4nz correctly pointed out in their post, while knowledge of and experience with popular DevOps tools is important, nothing beats in-depth knowledge of the underlying systems. The more you can learn about Linux, operating system design, distributed systems, git concepts, language design, and networking (it's always DNS ;)), the better. Yes, all the tools listed above are extremely useful and will help you do your job, but it helps to know why we use those tools in the first place. What problems are they solving? The solutions to many production problems have already been automated away for the most part: kubernetes will restart a failed service automatically, automated testing catches many common bugs, etc. ... but that means that sometimes the solution to the issue you're troubleshooting will be quite esoteric. Occam's razor still applies, and it's usually the simplest explanation that works; but sometimes the problem really is at the kernel level.

The biggest innovations in the IT world are generally ones of abstraction: config management abstracts away tedious server provisioning, cloud providers abstract away the data center, containers abstract away the OS level, container orchestration abstracts away the node and cluster level, etc. Understanding what is happening beneath each layer of abstraction is crucial. It gives you a "big picture" of how everything fits together and why things are the way they are, and it allows you to place new tools and information into that big picture, so you'll know why they'd be useful or whether or not they'd work for your company and team before you've even looked in-depth at them.

Anyway, I hope that helps. I'll be happy to answer any beginner/getting-started questions that folks have! I don't care to argue about this or that point in my post, but if you have a better suggestion or additional advice then please just add it here in the comments or in your own post!
A good DevOps Eng/SRE freely shares their knowledge so that we can all improve.
I am here to tell you the difference between commonly used Minecraft Server Variations (Paper), Minecraft APIs (Bukkit), and server connectors (Bungee). Minecraft Servers:
Vanilla: Vanilla is the official server that is downloadable on Mojang’s website. This is officially supported and endorsed by Mojang. It has no modifications and is almost the same as a constantly running LAN game.
CraftBukkit: CraftBukkit is the original modified Minecraft server. It has the ability to run plugins and is a modification of Minecraft’s code. CraftBukkit was partially shut down due to copyright issues with Mojang. Now the only way to get it is through BuildTools by Spigot. {Uses Bukkit API}
Spigot: A fork of CraftBukkit. Spigot was created to fix the things CraftBukkit did wrong, and it is one of the most popular servers to run. It has optimizations, more settings, and solved the legal issue that CraftBukkit had. This was done through BuildTools: it is essentially the official server code, but distributed so that you compile the binary yourself, though that really only takes a few clicks. Spigot also has an extended API, which allows many more plugin options and more possibilities. {Uses Spigot API}
Paper: A fork of Spigot. Spigot was getting slower, people don’t like running BuildTools, and there are many unfixed exploits and bugs in the Spigot code. That is where Paper comes in. As a fork of Spigot, it can also run plugins. Paper also has its own extended API on top of the Spigot API, which adds even more features. Paper is considered by many the best Minecraft server to run because it is faster than Spigot, has more options than Spigot, and has many bug fixes that Spigot doesn’t have. Another plus of Paper is that there is no BuildTools: you just download and run the jar. The code is already compiled for you, meaning you just put it in a folder and run it. {Uses Paper API}
Forge: A server for mods. Forge is a modded server, and it also has a modded client. Plugins differ from mods because mods require client-side modifications. Basically, you can join a server with plugins from any Minecraft client, but you can only join a Forge server from a modded client. The advantage of mods is that they completely change Minecraft. Mods have the ability to edit practically every line of Minecraft’s code, making this the most customizable of all of the servers. However, it requires a modded client and stronger hardware. {Uses Forge}
Sponge: Sponge was made to be an optimized Forge server. In other words, Sponge is to Forge what Spigot was to CraftBukkit. Sponge uses Forge in the server but provides many optimizations and bug fixes. The only downside is that it lost support after 1.12, likely due to Minecraft practically being rewritten and the developers not being able to keep up. {Uses Sponge API}
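Since several of the servers above boil down to "download the jar and run it," here is a rough sketch of what that looks like for something like Paper. The jar name and memory flags are illustrative, and the java command is commented out since it needs the actual jar; on the first run the server stops and asks you to accept the EULA, which you do by editing eula.txt:

```shell
#!/bin/sh
# Illustrative sketch of launching a jar-based Minecraft server.
# Accept the EULA (the server writes this file on first launch and
# exits until you set it to true):
echo "eula=true" > eula.txt

# Launch the server (commented out; requires the real jar and Java):
#   java -Xms2G -Xmx2G -jar paper.jar nogui
cat eula.txt
```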
Next we have the API that allows plugins to run. Note: These won’t work on Vanilla and only work on certain servers:
Bukkit API: The original plugin API. Bukkit API is the base for most of the Modern APIs. Bukkit provides many features. If you are developing a plugin, you are likely using some version of this. It is integrated into CraftBukkit.
Spigot API: The popular fork of Bukkit API. Spigot API is a better version of Bukkit API and it has additional features. This is the most commonly used API. It is integrated into Spigot.
Paper API: A fork of Spigot API. Paper API is Spigot API with more classes. Paper API would be the best API but a majority of servers are still on Spigot. It is integrated into Paper.
Sponge API: Sponge API is a standalone API for developing plugins that work with Sponge. You will likely only use this if you have a Sponge server.
Forge: Forge isn’t exactly an API, but it is a platform for creating mods. Forge is used by Forge and is in Forge and Sponge.
Lastly, we have the Reverse Proxy Server Connectors. These allow you to create a network of multiple servers while it still looks like one server. Note: Most of these will work with almost any of the Server Variations.
BungeeCord: BungeeCord is one of the most popular Reverse Proxy Server Connectors. BungeeCord was created by and is maintained by the Spigot Team. BungeeCord has direct support with Spigot servers.
Waterfall: Waterfall is a fork of BungeeCord. Waterfall provides more features and customization than BungeeCord, which is why it is so popular. Waterfall was created by and is maintained by PaperMC, and it is designed to work with Paper.
Others: There are many other forks of Spigot/Waterfall and I am too lazy to look them up. If you would like to please mention them and their features in the comments
That’s that. This is my beginner's guide to Minecraft Server Variations, APIs, and Reverse Proxies. Finally, I know what you are thinking: WHERE IS FABRIC IN THERE! Well, I don’t know or use Fabric so I can’t talk much about it. There are many people more qualified than me to describe Fabric. If you know Fabric, comment about it and what it does so more people can understand it. Also, just to point out, I spent 1.5 HOURS typing this, so please don’t be rude if I messed up or forgot something.
Best A-book (non binary option) brokers for non-USA citizens?
Hi, I'm a beginner at trading and I want a good non-binary-option broker to start my career with, because binary options seem too much like gambling to me (especially IQ Option), so I am avoiding them (for now). The problem is that I'm not a USA or EU citizen, so any recommendations? Thanks!
Day 14: It's in the cards (Tarot and other Cartomancy) PART 1
I've devoted a day to tarot because, like meditation, it can be a great tool for anyone on any path, as well as non-witches. To keep this from being its own book, I'm going to skip the history of tarot (which is interesting but easy to look up) and I'm not going to go in depth about any specific style. When I say the word tarot here, I mean specifically divination using a tarot deck with a total of 78 cards, including 22 major arcana (or "trump cards") and 56 minor arcana consisting of 4 suits (much like a deck of playing cards.) It is, however, perfectly fine to "misuse" the word tarot to mean any form of cartomancy that involves the use of "spreads." It's not going to offend anyone (except pedants) and it can be useful to add the word "tarot" to an inquiry or search, especially when looking for a deck online. There are three major variations of tarot deck: Rider-Waite, Tarot de Marseille, and Thoth. Most of the tarot decks available are based on the imagery of these three. There are differences, but generally they could be used interchangeably, so I'm not going to get too in depth about it. When you buy a deck, it can be helpful to know which style it is so you can be aware of how the creators meant it to be used. (You can usually find out online in stores or on the official webpage, but you can also search for the differences and find out what style your deck matches.) There are also versions that are unique to the deck/creator. As I said, tarot is for everyone. There is very low risk of cultural appropriation. (Decks are often themed and some decks can be considered cultural appropriation.) There are a surprising number of myths that are nothing more than gatekeeping bs. I'm going to get into it, but the first thing to know is that you do not need anything but a tarot deck to begin doing tarot. I've made a deck from a pack of index cards; it doesn't have to be an expensive deck or anything fancy.
Which brings me to a strange myth I keep seeing lately in reference to tarot as well as other practices. Outside of specific religions, there is nothing in magic that MUST be given as a gift to "work." The best theory I've found to explain this floating garbage is that it was started as basically a "gift with membership" or "foot in the door" technique by the Hermetic Order of the Golden Dawn. There's no basis in it. Divination is about your ability to interpret symbols in context. It doesn't really matter where they came from. Another myth is that you can't let other people touch your cards. You might not want to LET people touch them, and that's okay. It's a good idea to have boundaries about your tools and possessions. But someone touching the cards won't break them, you don't need to burn them, and if you don't mind any energy that does affect them (like love, awe, or open-minded curiosity), you don't need to cleanse them. You might choose to cleanse them after they are touched if you don't like the energy that comes from the person, or anytime you feel like you need to. Just like with anything that isn't yours, please do not just touch or grab someone's tarot cards without asking. It is rude. Tarot does not require the use of elaborate spreads. They are one option of many. It's okay to use them like a "magic 8 ball" and ask yes or no questions and pull one card. You might get an answer that means "ask again later," or one that indicates you should do a more elaborate spread, but you'll often get a yes or no. Because divination, cartomancy, and tarot are about interpreting specific symbols in a specific context, there aren't TRULY any beginner (or "beginner friendly") decks. It really doesn't matter if you have a single word, a detailed image, an emoji-style simple symbol, a color, or a paragraph: you still have to interpret it. You cannot get around doing the work.
If you find yourself connecting to a deck that is considered "advanced," you will still be able to read with it. If you need to look up meanings every time you use it, or write down your own meanings, or you realize you've gone way off from any "usual" meanings, that's okay. If it works, it doesn't matter what "level" other people have assigned to it. There is no "One True Way" to shuffle, pull, place, or read the cards of any deck. You do not even have to use all of the cards for a reading. It is okay to use only major or minor arcana if you want. I'll mention some quirks that can happen with decks in part 2. There are other common styles and decks designed for cartomancy. Lenormand cartomancy (often referred to as "Lenormand tarot") was named after a famous cartomancer from the 1800s who published several works related to cartomancy. The cards and system bearing her name were created after her death. This system uses 36 cards with images of animals, objects and people. It doesn't work very well for single card spreads or less linear spreads. Each card gives context to the next. Kipper cartomancy (also referred to as Kipper tarot) also uses 36 cards and creates very linear readings, and the imagery is not really meant to be interpreted as much as the reading is meant to basically be complete ideas. This deck is best for interpersonal affairs since the cards depict mainly people and actions. Oracle cards are other decks of varying styles with any number of cards. There is no standard oracle deck. Some have a lot of text, some have only a card title, some are only images. Like tarot decks, you can find an oracle deck that works for you. You can use tarot spreads with oracle cards without (extra) difficulty. You can do "yes/no" single card pulls with them. If you do not want to worry about 78 cards, or you want a deck personalized to you and your needs without the baggage of generally accepted meanings, an oracle deck might work better than tarot for you.
Playing cards are also a valid option. They can usually be obtained easily and inexpensively. The suits and numbers can correlate with tarot minor arcana or you might find regional meanings for playing card cartomancy. Playing cards are great for use in spells as well. If you do not own a tarot (or other cartomancy) deck but are interested in buying one, here is what I think you should consider:
First things first, if your intuition says "this one," get that one.
Think about your lifestyle and how you plan to use them. Do they need to stand up to heavy use? Do they need to be easily replaceable? Do they need to be easily portable? Would an app do the job? (I use 2 apps and a website. They have been great for getting used to the cards, learning what they mean both generally and to me, and building up a habit of practicing tarot. They are all free, and while I was very skeptical at first, they have all been very accurate.)
Think about your values and which of those your deck represents. You can find something that fits almost any and all values. (There are compostable cards, decks by just about any demographic you can imagine, and decks with a specific point of view- such as non-binary, LGBTQ+, patriarchy smashing, famous figures, animals, etc.) Most decks are themed and there is a theme for everyone. If you love crystals but don't want to support mining industries- you can choose from several crystal themed tarot or oracle decks.
Think about what you like to look at. Love puppies? Get a puppy deck. Got a fandom? You can probably find a deck (or a few) based on that. Prefer something minimal and luxurious? Take your pick. Want something super traditional to get you in the mood- get it.
My first tarot deck ever was a very small "Tarot Nova" deck. It came in a very small box (about the size of a post-it note.) It was perfect for my needs, which were mostly hanging out with the other witches and their pendulums at school. It was portable and very inexpensive, picked up from a rotating display of tiny novelty kits in the bookstore checkout line. I have so far lost all of my tarot cards to small children, complications from homelessness, and moving frequently as a younger adult. Now I am in a position where I don't have to think about those things. My next deck will represent my values of simplicity, intuition, and quality. My needs now are different from when I was 16 or 25. Your needs are different from 5 years ago and will be different again in 5 years. It's okay if your deck changes too. I am just going to touch on some common quirks that you might encounter.
Some decks either come with or develop their own personality. Some seem to give gentle, easy to understand readings, some seem to sass the reader. Some don't seem to like reading for other people, some repeat the answer to a question you didn't ask. These decks can be extra difficult to get used to, especially if you don't know that it's possible for a deck to have a personality.
Some decks/readers are prone to "jumpers." These are cards that jump or fall out of the deck during the reading. It's between you and your intuition to decide whether to read these cards and how to interpret them, based on how they jumped, what part of the reading or spread you were on when it happened, etc.
Some decks/readers will end up with a "personal card," which is a card that ends up representing the reader more than any other meaning. Usually there will be some clue as to why in the imagery, sometimes not. It's another thing that can be very confusing if you don't know it's a possibility.
You might feel like a deck doesn't want to work with you and no amount of cleansing can save it. On the flip side, you might struggle with several decks until you meet "the one." Meditation, careful selection, even a test drive should help avoid this, but sometimes it happens anyway.
I really thought I'd be able to trim tarot down to a one-day event, but even without talking about the history of tarot or even how-to tarot, I can't. So tomorrow will be cartomancy, continued. I'll talk about where the answers come from, what to do if your spread isn't giving you answers, and non-divination ways to use cards in witchcraft. So today, your optional assignment is to think of a deck you would like, then look to see if a version of it exists. It might help you find a deck you like, or just demonstrate how many options are out there. Then tell me:
Do you already use tarot or other cards for divination? What are some things you've noticed or learned about your personal relationship with your cards?
What do you currently think you'd like to look for in a deck, if you plan to get one? (Remember that there are no wrong answers. If your main concern is that it be free- that's okay. If you just want something beautiful- that's okay. If you have a 12 point checklist, it might be a little restrictive but I'll assume you put thought into it, so that's okay too.) Or do you think you'll pass until/unless cards come into your life? Or do you think you'll pass on card divination altogether?
How are your other practices going? Are you meditating regularly? Adding herbs to your BoS? Are you still using your BoS at all? How are you feeling right now? Even if you don't want to comment and check in with me, check in with yourself and make a note of where you are in this moment.
We've been going through a lot of information very quickly, thank you all so much for sticking with me so far. If you've fallen behind, don't stress. Catch up when you can. See you tomorrow!
All information presented is copyrighted material; you may not reproduce any part in any way except as permitted by US Copyright law. For info about reproduction permission, DM me. My current goal is to turn this into a book, and perhaps repeat this type of "course" in the future. I truly believe there is no cost of admission to witchcraft and I will never ask you to buy anything (from me or otherwise.) If you would like, and are comfortably able, to leave a tip, I do have CashApp, Venmo, and Paypal. (Starving artist is a lifestyle choice, but not-starving artist is great too. And no, I'm not actually starving, but I am looking at paying some money to get this project turned into a book and I've got my eye on this tarot deck...)